<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www2.physics.siu.edu/qunet/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Anada</id>
	<title>Qunet - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www2.physics.siu.edu/qunet/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Anada"/>
	<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php/Special:Contributions/Anada"/>
	<updated>2026-04-29T19:06:46Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.7</generator>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Quantum_Computation_and_Quantum_Error_Prevention&amp;diff=2334</id>
		<title>Quantum Computation and Quantum Error Prevention</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Quantum_Computation_and_Quantum_Error_Prevention&amp;diff=2334"/>
		<updated>2013-01-22T05:42:09Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Table of Contents: Part I -- The Basics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;QUNET IS FREE TO READ!  If you would like to ''contribute'' to the qunet wiki book (that is, to edit QUNET), you must have an account.  Click [https://qunet.physics.siu.edu/submit/requestform.html here] to request one.  &lt;br /&gt;
&lt;br /&gt;
NOTICE -- This site is incomplete.  It no doubt contains small mistakes and typos, and the citations are far from complete.  Also, this is a living document, so a final form may never exist.  Please email errors, typos, corrections, and/or comments to Mark Byrd.&lt;br /&gt;
&lt;br /&gt;
Please see the [[Appendix G - NOTES and CREDITS|Notes and Credits]] for further information and a list of contributors.  &lt;br /&gt;
&lt;br /&gt;
===Table of Contents: Part I -- The Basics===&lt;br /&gt;
&lt;br /&gt;
:&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;big&amp;gt;[[Preface]]&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Chapter 1 - Introduction]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Chapter 1 - Introduction#Introduction|Introduction]]&lt;br /&gt;
##[[Chapter 1 - Introduction#An Introduction to Quantum Computation|An Introduction to Quantum Computation]]&lt;br /&gt;
##[[Chapter 1 - Introduction#Bits and qubits: An Introduction|Bits and qubits: An Introduction]]&lt;br /&gt;
##[[Chapter 1 - Introduction#Obstacles to Building a Reliable Quantum Computer|Obstacles to Building a Reliable Quantum Computer]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Chapter 2 - Qubits and Collections of Qubits]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Chapter 2 - Qubits and Collections of Qubits#Introduction|Introduction]]&lt;br /&gt;
##[[Chapter 2 - Qubits and Collections of Qubits#Qubit States|Qubit States]]&lt;br /&gt;
##[[Chapter 2 - Qubits and Collections of Qubits#Qubit Gates|Qubit Gates]]&lt;br /&gt;
##[[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|The Pauli Matrices]]&lt;br /&gt;
##[[Chapter 2 - Qubits and Collections of Qubits#States of Many Qubits|States of Many Qubits]]&lt;br /&gt;
##[[Chapter 2 - Qubits and Collections of Qubits#Quantum Gates for Many Qubits|Quantum Gates for Many Qubits]]&lt;br /&gt;
##[[Chapter 2 - Qubits and Collections of Qubits#Measurement|Measurement]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Chapter 3 - Physics of Quantum Information]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Chapter 3 - Physics of Quantum Information#Introduction|Introduction]]&lt;br /&gt;
##[[Chapter 3 - Physics of Quantum Information#Schrodinger.27s_Equation|Schrödinger's Equation]]&lt;br /&gt;
##[[Chapter 3 - Physics of Quantum Information#Density Matrix for Pure States|Density Matrix for Pure States]]&lt;br /&gt;
##[[Chapter 3 - Physics of Quantum Information#Measurements Revisited|Measurements Revisited]]&lt;br /&gt;
##[[Chapter 3 - Physics of Quantum Information#Density Matrix for Mixed States|Density Matrix for a Mixed State]]&lt;br /&gt;
##[[Chapter 3 - Physics of Quantum Information#Expectation Values|Expectation Values]]&lt;br /&gt;
&amp;lt;!--##[[Chapter 3 - Physics of Quantum Information#Types of Two-state Systems|Types of Two-state Systems]]--&amp;gt;&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Chapter 4 - Entanglement]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Chapter 4 - Entanglement#Introduction|Introduction]]&lt;br /&gt;
##[[Chapter 4 - Entanglement#Entangled Pure States|Entangled Pure States]]&lt;br /&gt;
##[[Chapter 4 - Entanglement#Entangled Mixed States|Entangled Mixed States]]&lt;br /&gt;
##[[Chapter 4 - Entanglement#Extensions and Open Problems|Extensions and Open Problems]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Chapter 5 - Quantum Information: Basics and Simple Examples]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Chapter 5 - Quantum Information: Basics and Simple Examples#Introduction|Introduction]]&lt;br /&gt;
##[[Chapter 5 - Quantum Information: Basics and Simple Examples#No Cloning!|No Cloning!]]&lt;br /&gt;
##[[Chapter 5 - Quantum Information: Basics and Simple Examples#Uncertainty Principle|Uncertainty Principle]]&lt;br /&gt;
##[[Chapter 5 - Quantum Information: Basics and Simple Examples#Quantum Dense Coding|Quantum Dense Coding]]&lt;br /&gt;
##[[Chapter 5 - Quantum Information: Basics and Simple Examples#Teleporting a Quantum State|Teleporting a Quantum State]]&lt;br /&gt;
##[[Chapter 5 - Quantum Information: Basics and Simple Examples#QKD: BB84|QKD: BB84]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Chapter 6 - Noise in Quantum Systems]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Chapter 6 - Noise in Quantum Systems#Introduction|Introduction]]&lt;br /&gt;
##[[Chapter 6 - Noise in Quantum Systems#SMR Representation or Operator-Sum Representation|SMR Representation or Operator-Sum Representation]]&lt;br /&gt;
##[[Chapter 6 - Noise in Quantum Systems#Modelling Open System Evolution|Modelling Open System Evolution]]&lt;br /&gt;
##[[Chapter 6 - Noise in Quantum Systems#Unitary Degree of Freedom in the OSR|Unitary Degree of Freedom in the OSR]]&lt;br /&gt;
##[[Chapter 6 - Noise in Quantum Systems#Examples|Examples]]&lt;br /&gt;
##[[Chapter 6 - Noise in Quantum Systems#Notes|Notes]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Chapter 7 - Quantum Error Correcting Codes]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Chapter 7 - Quantum Error Correcting Codes#Introduction|Introduction]]&lt;br /&gt;
##[[Chapter 7 - Quantum Error Correcting Codes#Shor's Nine-Qubit Quantum Error Correcting Code|Shor's Nine-Qubit Quantum Error Correcting Code]]&lt;br /&gt;
##[[Chapter 7 - Quantum Error Correcting Codes#Quantum Error Correcting Codes: General Properties|Quantum Error Correcting Codes: General Properties]]&lt;br /&gt;
##[[Chapter 7 - Quantum Error Correcting Codes#Stabilizer Codes|Stabilizer Codes]]&lt;br /&gt;
##[[Chapter 7 - Quantum Error Correcting Codes#CSS Codes|CSS codes]]&lt;br /&gt;
##[[Chapter 7 - Quantum Error Correcting Codes#Steane's Seven Qubit Code|Steane's Seven Qubit Code]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Chapter 8 - Decoherence-Free/Noiseless Subsystems]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Chapter 8 - Decoherence-Free/Noiseless Subsystems#Introduction|Introduction]]&lt;br /&gt;
##[[Chapter 8 - Decoherence-Free/Noiseless Subsystems#General Considerations|General Considerations]]&lt;br /&gt;
##[[Chapter 8 - Decoherence-Free/Noiseless Subsystems#DNS Examples|DNS Examples]]&lt;br /&gt;
##[[Chapter 8 - Decoherence-Free/Noiseless Subsystems#Quantum Computing on a DNS|Quantum Computing on a DNS]]&lt;br /&gt;
##[[Chapter 8 - Decoherence-Free/Noiseless Subsystems#QC Examples|QC Examples]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Chapter 9 - Dynamical Decoupling Controls]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Chapter 9 - Dynamical Decoupling Controls#Introduction|Introduction]]&lt;br /&gt;
##[[Chapter 9 - Dynamical Decoupling Controls#General Conditions|General Conditions]]&lt;br /&gt;
##[[Chapter 9 - Dynamical Decoupling Controls#The Magnus Expansion|The Magnus Expansion]]&lt;br /&gt;
##[[Chapter 9 - Dynamical Decoupling Controls#First-Order Theory|First-Order Theory]]&lt;br /&gt;
##[[Chapter 9 - Dynamical Decoupling Controls#The Single Qubit Case|The Single Qubit Case]]&lt;br /&gt;
##[[Chapter 9 - Dynamical Decoupling Controls#Extensions|Extensions]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Chapter 10 - Fault-Tolerant Quantum Computing|Chapter 10 - Fault-Tolerant Quantum Computing]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Chapter 10 - Fault-Tolerant Quantum Computing|Introduction]]&lt;br /&gt;
##[[Chapter 10 - Fault-Tolerant Quantum Computing|Requirements for Fault-Tolerance]]&lt;br /&gt;
##[[Chapter 10 - Fault-Tolerant Quantum Computing|Concatenated Codes]]&lt;br /&gt;
##[[Chapter 10 - Fault-Tolerant Quantum Computing|Fault-Tolerant Quantum Computing for the Steane Code]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Chapter 11 - Hybrid Methods of Quantum Error Prevention]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Chapter 11 - Hybrid Methods of Quantum Error Prevention|Introduction]]&lt;br /&gt;
##[[Chapter 11 - Hybrid Methods of Quantum Error Prevention|General Principles for Combining Error Prevention Methods]]&lt;br /&gt;
##[[Chapter 11 - Hybrid Methods of Quantum Error Prevention|Examples of Hybrid Error Prevention]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Chapter 12 - Conclusions and Further Study]]&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Table of Contents: Part II -- Advanced Topics===&lt;br /&gt;
&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Chapter 13 - Topological Quantum Error Correction]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Chapter 13 - Topological Quantum Error Correction|Introduction]]&lt;br /&gt;
##[[Chapter 13 - Topological Quantum Error Correction|The Surface Code]]&lt;br /&gt;
&lt;br /&gt;
===Appendices===&lt;br /&gt;
&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Appendix A - Basic Probability Concepts]]&amp;lt;/big&amp;gt;&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Appendix B - Complex Numbers]]&amp;lt;/big&amp;gt;&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Appendix C - Vectors and Linear Algebra]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Appendix C - Vectors and Linear Algebra#Introduction|Introduction]]&lt;br /&gt;
##[[Appendix C - Vectors and Linear Algebra#Vectors|Vectors]]&lt;br /&gt;
##[[Appendix C - Vectors and Linear Algebra#Linear Algebra: Matrices|Linear Algebra: Matrices]]&lt;br /&gt;
##[[Appendix C - Vectors and Linear Algebra#More Dirac Notation|More Dirac Notation]]&lt;br /&gt;
##[[Appendix C - Vectors and Linear Algebra#Transformations|Transformations]]&lt;br /&gt;
##[[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|Eigenvalues and Eigenvectors]]&lt;br /&gt;
##[[Appendix C - Vectors and Linear Algebra#Tensor Products|Tensor Products]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Appendix D - Group Theory]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Appendix D - Group Theory#Introduction|Introduction]]&lt;br /&gt;
##[[Appendix D - Group Theory#Definitions and Examples|Definitions and Examples]]&lt;br /&gt;
##[[Appendix D - Group Theory#Comparing Groups: Homomorphisms and Isomorphisms|Comparing Groups: Homomorphisms and Isomorphisms]]&lt;br /&gt;
##[[Appendix D - Group Theory#Discussion|Discussion]]&lt;br /&gt;
##[[Appendix D - Group Theory#Applications to Physics|Applications to Physics]]&lt;br /&gt;
##[[Appendix D - Group Theory#A Little Representation Theory|A Little Representation Theory]]&lt;br /&gt;
##[[Appendix D - Group Theory#Infinite Order Groups: Lie Groups|Infinite Order Groups: Lie Groups]]&lt;br /&gt;
##[[Appendix D - Group Theory#More Representation Theory|More Representation Theory]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Appendix E - Density Operator: Extensions]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Appendix E - Density Operator: Extensions#Introduction|Introduction]]&lt;br /&gt;
##[[Appendix E - Density Operator: Extensions#An N-dimensional Generalization of the Polarization Vector|An N-dimensional Generalization of the Polarization Vector]]&lt;br /&gt;
##[[Appendix E - Density Operator: Extensions#The Density Matrix for Two Qubits|The Density Matrix for Two Qubits]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Appendix F - Classical Error Correcting Codes]]&amp;lt;/big&amp;gt;&lt;br /&gt;
##[[Appendix F - Classical Error Correcting Codes#Introduction|Introduction]]&lt;br /&gt;
##[[Appendix F - Classical Error Correcting Codes#Binary Operations|Binary Operations]]&lt;br /&gt;
##[[Appendix F - Classical Error Correcting Codes#Definitions and Basics|Definitions and Basics]]&lt;br /&gt;
##[[Appendix F - Classical Error Correcting Codes#Linear Codes|Linear Codes]]&lt;br /&gt;
##[[Appendix F - Classical Error Correcting Codes#Errors|Errors]]&lt;br /&gt;
##[[Appendix F - Classical Error Correcting Codes#The Disjointness Condition and Correcting Errors|The Disjointness Condition and Correcting Errors]]&lt;br /&gt;
##[[Appendix F - Classical Error Correcting Codes#The Hamming Bound|The Hamming Bound]]&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Appendix G - NOTES and CREDITS]]&amp;lt;/big&amp;gt;&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Extensions]]&amp;lt;/big&amp;gt;&lt;br /&gt;
#&amp;lt;big&amp;gt;[[Testing]]&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Glossary===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;[[Glossary]]&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Index===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;[[Index]]&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Bibliography===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;[[Bibliography]]&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Notation===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;[[Notation]]&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
''Much of this material is based upon work supported by the National Science Foundation under Grant No. 0545798.  However, any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.''&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_13_-_Topological_Quantum_Error_Correction&amp;diff=1933</id>
		<title>Chapter 13 - Topological Quantum Error Correction</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_13_-_Topological_Quantum_Error_Correction&amp;diff=1933"/>
		<updated>2012-11-11T06:25:57Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Syndrome Extraction and Error Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Surface Code ==&lt;br /&gt;
===Introduction===&lt;br /&gt;
&lt;br /&gt;
Surface codes are topological quantum error-correcting codes in which we can think of the qubits as arranged on a two-dimensional lattice with only nearest-neighbour interactions. In practice this may prove to be a very useful feature, since for many systems coupling qubits that are close together is substantially less difficult than coupling ones that are farther apart. We can think of the physical qubits as sitting on the edges of a lattice, as shown in Figure 1. Two examples of surface codes are the toric code and the planar code; the main difference between them is the boundary condition. In the toric code the boundaries are periodic, whereas in the planar code they are not. In the toric code the qubits are arranged on a lattice that can be thought of as wrapped around the surface of a torus, while in the planar code we think of the data qubits as living on a simple 2-D plane, with ancilla qubits on the faces and at the intersections.&lt;br /&gt;
&amp;lt;center&amp;gt;[[File:lattice.jpg]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Figure 1'''&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;A two-dimensional array implementation of the surface code. Data qubits are open circles; measurement (ancilla) qubits are filled circles.&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;The yellow regions indicate measure-Z qubits, while the green regions indicate measure-X qubits.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The stabilizer generators for the surface code are the tensor products of ''Z'' on the four data qubits around each face, and the tensor products of ''X'' on the four data qubits around each intersection. Neighbouring stabilizers share two data qubits, ensuring that adjacent ''X'' and ''Z'' stabilizers commute. The qubit ''Z'' eigenstates are called the ground state &amp;lt;math&amp;gt;\left\vert{g}\right\rangle&amp;lt;/math&amp;gt; and the excited state &amp;lt;math&amp;gt;\left\vert{e}\right\rangle&amp;lt;/math&amp;gt;. The ground state is the ''+1'' eigenstate of ''Z'', with &amp;lt;math&amp;gt;Z\left\vert{g}\right\rangle=+\left\vert{g}\right\rangle&amp;lt;/math&amp;gt;, and the excited state is the ''-1'' eigenstate, with &amp;lt;math&amp;gt;Z\left\vert{e}\right\rangle=-\left\vert{e}\right\rangle&amp;lt;/math&amp;gt;. It is tempting to think of the qubit as a kind of quantum transistor, with the ground state corresponding to &amp;quot;off&amp;quot; and the excited state to &amp;quot;on&amp;quot;. However, in distinct contrast to a classical logic element, a qubit can exist in a superposition of its eigenstates, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle=\alpha\left\vert{g}\right\rangle+\beta\left\vert{e}\right\rangle&amp;lt;/math&amp;gt;, so a qubit can be both &amp;quot;off&amp;quot; and &amp;quot;on&amp;quot; at the same time. A measurement &amp;lt;math&amp;gt;M_Z&amp;lt;/math&amp;gt; of the qubit will, however, return only one of two possible outcomes: ''+1'', with the qubit state projected to &amp;lt;math&amp;gt;\left\vert{g}\right\rangle&amp;lt;/math&amp;gt;, or ''-1'', with the qubit state projected to &amp;lt;math&amp;gt;\left\vert{e}\right\rangle&amp;lt;/math&amp;gt;.&lt;br /&gt;
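The commutation claim above can be checked directly with a small numpy sketch (a hedged illustration: the six-qubit layout and variable names here are our own, not from the text). A ''Z'' plaquette and an ''X'' star that share two data qubits commute, while an overlap on only a single qubit would make them anti-commute.&lt;br /&gt;

```python
import numpy as np
from functools import reduce

# Single-qubit Pauli operators.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def kron(ops):
    """Tensor product of a list of single-qubit operators."""
    return reduce(np.kron, ops)

# Six data qubits: a Z plaquette on qubits 0-3 and an X star on qubits 2-5
# overlap on exactly two qubits (2 and 3), so they commute.
plaq = kron([Z, Z, Z, Z, I2, I2])
star = kron([I2, I2, X, X, X, X])
assert np.allclose(plaq @ star, star @ plaq)

# Overlapping on a single qubit (qubit 3 only) would make them anti-commute,
# which is why the lattice geometry enforces two shared qubits.
star1 = kron([I2, I2, I2, X, X, X])
assert np.allclose(plaq @ star1, -(star1 @ plaq))
```

Each shared qubit contributes a factor of -1 from ZX = -XZ, so an even overlap gives commuting stabilizers.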
&lt;br /&gt;
A planar code has four boundaries, two called “smooth” and two called “rough”. Smooth boundaries have four-term ''Z'' stabilizer generators and three-term ''X'' stabilizer generators, whereas rough boundaries have four-term ''X'' stabilizer generators and three-term ''Z'' stabilizer generators. A planar code with two rough and two smooth boundaries can encode a single logical qubit (as in Figure 2). See also http://arxiv.org/abs/1208.0928 for an excellent and comprehensive review of the surface code, written for the absolute beginner.&lt;br /&gt;
&amp;lt;center&amp;gt;[[File:boundaries.jpg]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Figure 2'''&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;Examples of smooth and rough boundaries. This figure has been copied with permission from the authors of Ref.2&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Syndrome Extraction and Error Detection===&lt;br /&gt;
Detecting errors involves measuring the check operators and observing which ones return a value of ''-1'' (because they anti-commute with the errors). This information helps us infer where errors occurred. In practice, of course, errors need not occur in isolation, and one often observes several next to each other. In these cases the error operators form error chains across the lattice. Since only the ends of such error chains anti-commute with the check operators, determining where errors occurred often amounts to guessing the most likely scenario. In the planar case, chains that connect opposite boundaries of the same type (either left to right, or top to bottom), and in the toric case, chains that span all the way across a given dimension of the lattice, turn out to change the encoded, logical state of the qubit and hence are called logical errors. Two examples are shown in Figures 3 and 4. &amp;lt;math&amp;gt;Z_L&amp;lt;/math&amp;gt; is a chain of ''Z'' operators that connects two rough boundaries, and &amp;lt;math&amp;gt;X_L&amp;lt;/math&amp;gt; is a chain of ''X'' operators that connects two smooth ones.&lt;br /&gt;
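The endpoint behaviour of error chains can be sketched numerically. The following toy model is an assumption-laden illustration, not code from the text: it takes an L x L toric layout with horizontal edges h[r][c] and vertical edges v[r][c] (our own indexing convention), and computes the ''Z''-plaquette syndrome as the parity of ''X'' errors on each plaquette boundary. Only the two plaquettes at the ends of a chain report a ''-1'' outcome, and a chain wrapping all the way around the torus reports no syndrome at all: a logical error.&lt;br /&gt;

```python
import numpy as np

L = 5  # toric code on an L x L torus

def z_syndrome(x_err_h, x_err_v):
    """Parity of X errors on the boundary of each Z plaquette p[r][c].

    Plaquette p[r][c] has boundary edges h[r][c], h[r+1][c] (top/bottom)
    and v[r][c], v[r][c+1] (left/right), indices taken mod L.
    """
    s = np.zeros((L, L), dtype=int)
    for r in range(L):
        for c in range(L):
            s[r][c] = (x_err_h[r][c] ^ x_err_h[(r + 1) % L][c]
                       ^ x_err_v[r][c] ^ x_err_v[r][(c + 1) % L])
    return s

# An X-error chain crossing vertical edges v[2][1], v[2][2], v[2][3].
xh = np.zeros((L, L), dtype=int)
xv = np.zeros((L, L), dtype=int)
for c in (1, 2, 3):
    xv[2][c] = 1

s = z_syndrome(xh, xv)
# Interior plaquettes see two errors each (even parity); only the two
# endpoint plaquettes, (2,0) and (2,3), anti-commute with the chain.
print(np.argwhere(s == 1))  # [[2 0] [2 3]]

# A chain of X errors wrapping the whole way around the torus produces
# no syndrome at all, yet flips the encoded state: a logical X error.
xv_wrap = np.zeros((L, L), dtype=int)
xv_wrap[2, :] = 1
assert z_syndrome(xh, xv_wrap).sum() == 0
```

The decoder's job is exactly the guessing step described above: pairing up the lit endpoints with the most likely connecting chains.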
&amp;lt;center&amp;gt;[[File:defect.jpg]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Figure 3'''&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;Examples of error syndromes on the surface code (planar and toric). The state is initialized to the ''+1'' eigenstate of all stabilizers.&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;Shaded qubits indicate locations of ''X'' errors. This figure has been copied with permission from the authors of Ref.5&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;[[File:error.jpg]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Figure 4'''&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;A planar surface code in which a logical ''Z (X)'' error is a chain of ''Z (X)'' operators that spans the whole lattice and connects rough (smooth) boundaries. This figure has been copied with permission from the authors of Ref.5&amp;lt;/center&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_13_-_Topological_Quantum_Error_Correction&amp;diff=1931</id>
		<title>Chapter 13 - Topological Quantum Error Correction</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_13_-_Topological_Quantum_Error_Correction&amp;diff=1931"/>
		<updated>2012-11-11T06:23:34Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Syndrome Extraction and Error Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Surface Code ==&lt;br /&gt;
===Introduction===&lt;br /&gt;
&lt;br /&gt;
Surface codes  are topological quantum error-correcting codes in which we can think of qubits being arranged on a 2-D lattice of qubits with only nearest neighbor interactions. This in practice may prove to be a very useful feature, since for many systems interacting qubits that are close to each other is substantially less difficult than ones that are further apart. We can think of physical qubits as being arranged on the edges of a lattice as shown in Figure 1. An example of the surface code are toric code and  planar code, the main difference between both of them is the boundary condition. In the toric code, the boundaries  are periodic whereas  in the case of the planar code, the boundaries are not periodic. In the toric code, the qubits are arranged on a lattice which can be thought of as spread over a surface of a torus, and in a planar code case we think of the data qubits as living on a simple 2-D plane and ancilla qubits on the faces and the intersections.&lt;br /&gt;
&amp;lt;center&amp;gt;[[File:lattice.jpg]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Figure 1'''&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;A two-dimensional array implementation of the surface code. Data qubits are open circles, measurements (ancilla) qubits are filled circles.&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;The yellow area is to  measure-Z qubits while the green area is to  measure-X qubits.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The stabilizer generators for the surface code are the tensor products of ''Z'' on the four data qubits around each face, and the tensor products of ''X'' on&lt;br /&gt;
the four data qubits around each intersection. Neighbouring stabilizers share two data qubits ensuring that adjacent ''X'' and ''Z'' stabilizers commute. The qubit ''Z'' eigenstate are called the ground state &amp;lt;math&amp;gt;\left\vert{g}\right\rangle&amp;lt;/math&amp;gt;  and the excited state &amp;lt;math&amp;gt;\left\vert{e}\right\rangle&amp;lt;/math&amp;gt;. The ground state is the ''+1'' eigenstate of ''Z'', with &amp;lt;math&amp;gt;Z\left\vert{g}\right\rangle=+\left\vert{g}\right\rangle&amp;lt;/math&amp;gt;, and the excited state is the ''-1'' eigenstate, with &amp;lt;math&amp;gt;Z\left\vert{e}\right\rangle=-\left\vert{e}\right\rangle&amp;lt;/math&amp;gt;. It is tempting to think of the qubit as a kind of quantum transistor, with the ground state corresponding to &amp;quot;off&amp;quot; and the excited state to &amp;quot;on&amp;quot;. However, in distinct contrast to a classical logical element, a qubit can exist in a superposition of its eigenstate, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle=\alpha\left\vert{g}\right\rangle+\beta\left\vert{g}\right\rangle&amp;lt;/math&amp;gt;, so a qubit can be both &amp;quot;off&amp;quot; and on&amp;quot; at the same time. A measurment &amp;lt;math&amp;gt;M_Z&amp;lt;/math&amp;gt; of the qubit will however return only one of two possible measurement outcomes,''+1'' with the qubit state projected to &amp;lt;math&amp;gt;\left\vert{g}\right\rangle&amp;lt;/math&amp;gt;, or ''-1'' with the qubit state projected to &amp;lt;math&amp;gt;\left\vert{e}\right\rangle&amp;lt;/math&amp;gt; .&lt;br /&gt;
&lt;br /&gt;
A planar code has four boundaries, two that are called “smooth” and two that are called “rough”. Smooth boundaries have four-term ''Z'' stabilizer generators, and three-term ''X'' stabilizer generators, whereas rough boundaries have four-term ''X'' stabilizer generators and three-term ''Z'' stabilizer generators. A planar code, with two rough and two smooth boundaries can encode a single logical qubit (as in Figure 2). Also look at http://arxiv.org/abs/1208.0928 it is an excellent reference as it represents a comprehensive review of the surface code, also written for the absolute beginner.&lt;br /&gt;
&amp;lt;center&amp;gt;[[File:boundaries.jpg]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Figure 2'''&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;Examples of smooth and rough boundaries. This figure has been copied with permission from the authors of Ref.2&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Syndrome Extraction and Error Detection===&lt;br /&gt;
Detecting errors involves measuring the check operators and observing which ones give a value of ''-1'' (due to anti-commuting with errors). This information helps us infer where errors occurred. In practice, of course, errors need not occur in isolation, and one often observes multiple errors next to each other. In these cases, the error operators form error chains throughout the lattice. Since only&lt;br /&gt;
the ends of such error chains anti-commute with the check operators, determining where errors occurred often involves guessing the most likely scenario. In the planar case, chains that connect opposite boundaries of the same type (either left to right, or top to bottom), and&lt;br /&gt;
in the toric case, chains that span all the way across a given dimension of the lattice, commute with every check operator. Such chains turn out to change the encoded, logical&lt;br /&gt;
state of the qubit and hence are called logical errors. Two examples are shown in Figures 3 and 4. &amp;lt;math&amp;gt;Z_L&amp;lt;/math&amp;gt; is a chain of ''Z'' operators that connects two rough boundaries, and &amp;lt;math&amp;gt;X_L&amp;lt;/math&amp;gt; is a chain of ''X'' operators that connects two smooth ones.&lt;br /&gt;
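To build intuition for why only chain endpoints are flagged, here is a toy one-dimensional analogue in Python (a simplified sketch, not the actual two-dimensional surface code): qubits sit on a line, each check compares two neighbours, and a contiguous chain of flips triggers only the checks at its two ends.&lt;br /&gt;

```python
# Toy 1-D analogue of syndrome extraction: qubits 0..9 in a line, with a
# parity check between each adjacent pair. A contiguous chain of bit-flip
# errors triggers only the checks at the two ends of the chain.
n = 10
errors = {3, 4, 5}                      # an error chain on qubits 3-5

# Check i compares qubits i and i+1; it fires (-1) when exactly one of
# the pair is flipped, i.e. at the chain's endpoints.
syndrome = [i for i in range(n - 1) if (i in errors) != (i + 1 in errors)]
print(syndrome)   # [2, 5]: only the ends of the chain are flagged
```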
&amp;lt;center&amp;gt;[[File:defect.jpg]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Figure 3'''&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;Examples of error syndromes on the Surface code (planar and toric). The state is initialized to the ''+1'' eigenstate of all stabilizers.&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;Shaded qubits indicate locations of ''X'' errors. This figure has been copied with permission from the authors of Ref.5&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;[[File:error.jpg]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Figure 4'''&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;A planar surface code in which a logical ''Z (X)'' error is a chain of ''Z (X)''&lt;br /&gt;
operators that spans the whole lattice and connects rough (smooth) boundaries.&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;This figure has been copied with permission from the authors of Ref.5&amp;lt;/center&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_13_-_Topological_Quantum_Error_Correction&amp;diff=1918</id>
		<title>Chapter 13 - Topological Quantum Error Correction</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_13_-_Topological_Quantum_Error_Correction&amp;diff=1918"/>
		<updated>2012-11-08T19:32:55Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Surface Code ==&lt;br /&gt;
===Introduction===&lt;br /&gt;
&lt;br /&gt;
Surface codes are topological quantum error-correcting codes in which the qubits can be thought of as arranged on a 2-D lattice with only nearest-neighbor interactions. In practice this may prove to be a very useful feature, since for many systems it is substantially easier to couple qubits that are close to each other than ones that are further apart. We can think of the physical qubits as being arranged on the edges of a lattice as shown in Figure 1. Examples of surface codes are the toric code and the planar code; the main difference between them is the boundary condition. In the toric code the boundaries are periodic, whereas in the planar code they are not. In the toric code, the qubits are arranged on a lattice which can be thought of as spread over the surface of a torus, while in the planar code we think of the data qubits as living on a simple 2-D plane, with ancilla qubits on the faces and the intersections.&lt;br /&gt;
&amp;lt;center&amp;gt;[[File:lattice.jpg]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;'''Figure 1'''&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;A two-dimensional array implementation of the surface code. Data qubits are open circles; measurement (ancilla) qubits are filled circles.&amp;lt;/center&amp;gt;&amp;lt;center&amp;gt;The yellow areas indicate measure-Z qubits, while the green areas indicate measure-X qubits.&amp;lt;/center&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_10_-_Fault-Tolerant_Quantum_Computing&amp;diff=1810</id>
		<title>Chapter 10 - Fault-Tolerant Quantum Computing</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_10_-_Fault-Tolerant_Quantum_Computing&amp;diff=1810"/>
		<updated>2012-02-16T23:34:52Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* 3. Carefully Prepare the Ancilla */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Introduction==&lt;br /&gt;
&lt;br /&gt;
As the name implies, fault-tolerant quantum computing means that quantum computations can be performed in spite of errors in the computation.  To ensure that a computation is reliable, one must be able to prevent errors from accumulating.  This could happen, for example, if a small error occurs on one qubit and propagates to many others before it is fixed.  What are all the ways in which an error can occur, and how can they be prevented from accumulating to produce erroneous results?  In this chapter, these questions are addressed.  &lt;br /&gt;
&lt;br /&gt;
==Requirements for Fault-Tolerance==&lt;br /&gt;
&lt;br /&gt;
As Preskill puts it in [[Bibliography#LoPopescuSpiller|Lo, Popescu, and Spiller]], one needs to &amp;quot;...sniff out all the ways in which a recovery failure could result from a single error, ...&amp;quot;  Then, in a [[Bibliography#Preskill:prsl|Proc. Roy. Soc. London article]], he gives five laws for reliable quantum computing, as he reviews the results obtained for avoiding failure.  Here, a slightly modified list is discussed.  The list is &lt;br /&gt;
# Be careful not to propagate errors, &lt;br /&gt;
# Copy errors not data,&lt;br /&gt;
# Carefully prepare ancilla, &lt;br /&gt;
# Verify ancilla, &lt;br /&gt;
# Verify the syndrome, &lt;br /&gt;
# Take care with measurements.&lt;br /&gt;
&lt;br /&gt;
  &lt;br /&gt;
&lt;br /&gt;
All of these require some explanation.  Let us take them in order.  &lt;br /&gt;
&lt;br /&gt;
===1. Propagation of Errors===&lt;br /&gt;
&lt;br /&gt;
The general statement is that an error should not propagate within a code block and errors should not accumulate.  If the error probability for one physical qubit is &amp;lt;math&amp;gt; \epsilon\,\!&amp;lt;/math&amp;gt;, then the objective is to ensure that the block error is reduced by a power.  For protection against one error, the block error should be &amp;lt;math&amp;gt; \epsilon^2\,\!&amp;lt;/math&amp;gt;, and when encoding against &amp;lt;math&amp;gt; t\,\!&amp;lt;/math&amp;gt; errors, the block error should be &amp;lt;math&amp;gt; \epsilon^{t+1}\,\!&amp;lt;/math&amp;gt;.  If an error propagates within a block, the block error becomes &amp;lt;math&amp;gt; \epsilon\,\!&amp;lt;/math&amp;gt; and the encoding has lost all its benefit.  &lt;br /&gt;
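The scaling can be made concrete with a short numeric sketch (illustrative only; the combinatorial prefactor in front of the power is ignored here):&lt;br /&gt;

```python
# With physical error rate eps, a code protecting against t errors
# fails only when t+1 or more errors occur, so to leading order the
# block error scales as eps**(t+1) (constant prefactors omitted).
eps = 1e-3
block_errors = {t: eps ** (t + 1) for t in (1, 2, 3)}
for t, p in block_errors.items():
    print(t, p)
```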
&lt;br /&gt;
For example, one should be careful when qubits are reused because error correction procedures can actually propagate errors.  Consider the syndrome measurement in [[#Figure 7.2|Figure 7.2]].  In that circuit, one of the ancillary qubits is used twice to check the parity of a pair of qubits in the bit-flip code.  This, however, can propagate a single error in the ancilla to two qubits in the code block.  But this code can only detect and correct one error, so such an event would lead to failure.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.3&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.3'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|-&lt;br /&gt;
| [[File:SyndromeNFT.jpg|center|300px]]&lt;br /&gt;
|&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
| [[File:SyndromeFT.jpg|center|300px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.3: Two different syndrome extraction circuits for the three-qubit quantum error correcting code.  The figure on the left is not fault-tolerant.  It is the same as [[#Figure 7.2|Figure 7.2]].  The figure on the right is fault-tolerant.  However, as explained in the text, it cannot be used.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
At first this may not seem likely.  After all, the target bit is the one that is affected.  However, as shown in [[#Figure 7.4|Figure 7.4]] errors can actually propagate from the target to the source, not just from the source to the target, in the CNOT operation.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;div id=&amp;quot;Figure 7.4&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''Figure 7.4'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
| [[File:CNOTgateid1.jpg|center|400px]]&lt;br /&gt;
|}&lt;br /&gt;
Figure 7.4: The above circuit identity was used by Preskill to show that errors can propagate in not-so-obvious ways.  In this particular case, an error can propagate from the source qubit as well as propagating to the target qubit in a CNOT gate.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===2. Copy Errors Not Data===&lt;br /&gt;
&lt;br /&gt;
One ancilla for each physical qubit in the encoded block gives too much information about the state.  If there is one ancilla for each qubit and all are measured, the superposition of &amp;lt;math&amp;gt;\alpha\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; will be destroyed.   This is because the information obtained will result in a &amp;lt;math&amp;gt;|000\rangle \,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;|111\rangle \,\!&amp;lt;/math&amp;gt; state of the system.  The data, not just the error, has been extracted leaving only classical information.  This is clearly unacceptable for quantum information where the details of the state must never be revealed during the computation.  Therefore, another method for ancilla preparation and syndrome extraction must be used.&lt;br /&gt;
&lt;br /&gt;
===3. Carefully Prepare the Ancilla===&lt;br /&gt;
&lt;br /&gt;
In the previous section, the method extracted too much information.  So what can be done?  The answer is that a different recovery procedure should be used.  Rather than using single-qubit ancillas, the ancillary system can be composed of several qubits in a special state.  &lt;br /&gt;
&lt;br /&gt;
Here are two examples for the Steane code.  The first, proposed by Shor, is a specially prepared ancilla state which is the superposition of all even-weight strings,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|\mbox{Shor}\rangle_{\text{anc}}=\frac{1}{\sqrt{8}}\sum_{\text{even } v} |v\rangle_{\text{anc}}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|10.1}}&lt;br /&gt;
In this case, one computes the syndrome by performing four CNOT gates and then measuring the ancilla state.  Since the ancilla state is a superposition of all even-weight states, the result will project onto an even-weight string if there is no error and onto an odd-weight string if there is an error, thus giving the syndrome information.  The parity of the ancillary system indicates whether or not there is an error, but without revealing the state of the system.  &lt;br /&gt;
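As a quick check of the counting in Eq. 10.1 (a sketch for illustration, not a simulation of the full protocol), one can enumerate the even-weight bit strings in Python and verify that a single flip changes the parity:&lt;br /&gt;

```python
from itertools import product

# Even-weight strings of 4 bits: there are 8 of them, matching the
# 1/sqrt(8) normalization of the Shor ancilla state in Eq. 10.1.
even = [v for v in product((0, 1), repeat=4) if sum(v) % 2 == 0]
print(len(even))   # 8

# The transversal CNOTs add data parities (mod 2) into the ancilla, so
# a single error flips the measured ancilla parity from even to odd
# without revealing the encoded amplitudes. Flip bit 0 of each string:
flipped = [tuple(b ^ (i == 0) for i, b in enumerate(v)) for v in even]
assert all(sum(v) % 2 == 1 for v in flipped)   # parity became odd
```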
&lt;br /&gt;
The second example was given by Steane.  The ancilla state is given by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; &lt;br /&gt;
|\text{Steane}\rangle_{\text{anc}}=\frac{1}{\sqrt{4}}\sum_{\text{Hamming}} |v\rangle_{\text{anc}}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|10.2}}&lt;br /&gt;
In this case the ancilla are measured directly and the classical Hamming code is used to diagnose the errors using the parity check matrix for the classical code.  &lt;br /&gt;
&lt;br /&gt;
The differences between the two are as follows:&lt;br /&gt;
&lt;br /&gt;
Shor: 6 syndrome bits, 24 ancilla, 24 CNOT gates&lt;br /&gt;
&lt;br /&gt;
Steane: 14 ancilla, 14 CNOT gates; the ancilla state is more difficult to prepare.&lt;br /&gt;
&lt;br /&gt;
===4. Verifying the Ancilla===&lt;br /&gt;
&lt;br /&gt;
Since the ancilla is so important, and can propagate an error to the logical qubit (encoded block), it must be carefully prepared and checked for errors.  If there is an error, the ancilla must be thrown away and another one prepared.  This must be repeated until the probability of error in the ancilla is sufficiently low.&lt;br /&gt;
&lt;br /&gt;
===5. Check the Measurement===&lt;br /&gt;
&lt;br /&gt;
Checking the measurement, by measuring more than once for example, is necessary since error correction during the computation is not useful if the output of the computation is flawed.  &lt;br /&gt;
&lt;br /&gt;
==Concatenated Codes==&lt;br /&gt;
&lt;br /&gt;
Concatenating a code means that a quantum error correcting code is used to encode the set of qubits in a code.  That is, each qubit in a code is also an encoded qubit.  One can continue so that the next set of qubits is also a set of encoded qubits.  So for example, in the bit-flip code, one would use three qubits to encode one logical one.  But if each of the qubits used in the encoding is also an encoded qubit, using three qubits, then there are a total of 9.  Continuing will lead to a code which uses &amp;lt;math&amp;gt;L\,\!&amp;lt;/math&amp;gt; levels of encoding using &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt; qubits, say, which will produce a total overhead of &amp;lt;math&amp;gt;n^L\,\!&amp;lt;/math&amp;gt; qubits to encode one logical one.  &lt;br /&gt;
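The qubit overhead described here is straightforward to tabulate (a trivial sketch, using the bit-flip code's three qubits as the example from the text):&lt;br /&gt;

```python
# L levels of concatenation with an n-qubit code uses n**L physical
# qubits per logical qubit.
def overhead(n, L):
    return n ** L

# Bit-flip code (n = 3): 3 qubits at one level, 9 at two, 27 at three.
print([overhead(3, L) for L in (1, 2, 3)])   # [3, 9, 27]
```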
&lt;br /&gt;
There are several advantages to this.  One, the measurement and recovery can (in principle) be carried out in parallel for the different levels.  Thus ''error correction can be performed more efficiently'' than if the code were simply encoded using more qubits in each block.  (In other words, simply increasing the distance can lengthen the error correction recovery procedure.)  &lt;br /&gt;
&lt;br /&gt;
Two, due to this, the scaling of the gate error rate is better.  The gates do not need to be implemented in sequence, and the error probability can be reduced, assuming the error rate per elementary (physical) gate is low enough.&lt;br /&gt;
&lt;br /&gt;
==Fault-Tolerant Quantum Computing for the Steane Code==&lt;br /&gt;
&lt;br /&gt;
Once a code is concatenated, gates will need to be performed.  &lt;br /&gt;
&lt;br /&gt;
Gates that can be performed bit-wise:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;H, X, CNOT, P \,\!&amp;lt;/math&amp;gt;, where &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;P = \left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; i \end{array}\right) \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, these are not universal.  So they need to be supplemented with another gate. &lt;br /&gt;
&lt;br /&gt;
Solution:  Prepare an ancilla state, check it, and use it to implement a gate using gates from the fault-tolerant set.  &lt;br /&gt;
&lt;br /&gt;
This sets up a timing problem.  How can the ancilla state be prepared at the right time, given that it is thrown out if it is bad?  One cannot wait until it is needed to prepare it; if one did wait, the original state would have to be stored while the ancilla is created.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Fault-Tolerant Quantum Computing for Stabilizer Codes==&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_C_-_Vectors_and_Linear_Algebra&amp;diff=1791</id>
		<title>Appendix C - Vectors and Linear Algebra</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_C_-_Vectors_and_Linear_Algebra&amp;diff=1791"/>
		<updated>2012-01-05T09:03:45Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* The Determinant */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
This appendix introduces some aspects of linear algebra and complex&lt;br /&gt;
algebra that will be helpful for the course.  In addition, Dirac&lt;br /&gt;
notation is introduced and explained.&lt;br /&gt;
&lt;br /&gt;
===Vectors===&lt;br /&gt;
&lt;br /&gt;
Here we review some facts about real vectors before discussing the representations and complex analogues used in quantum mechanics.  &lt;br /&gt;
&lt;br /&gt;
====Real Vectors====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The simple definition of a vector --- an object that has magnitude and&lt;br /&gt;
direction --- is helpful to keep in mind even when dealing with complex&lt;br /&gt;
and/or abstract vectors as we will here.  In three dimensional space,&lt;br /&gt;
a vector is often written as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v} = v_x\hat{x} + v_y \hat{y} + v_z\hat{z},&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where the hat (&amp;lt;math&amp;gt;\hat{\cdot}\,\!&amp;lt;/math&amp;gt;) denotes a unit vector and the components&lt;br /&gt;
&amp;lt;math&amp;gt;v_i\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;i = x,y,z\,\!&amp;lt;/math&amp;gt; are just numbers.  The unit vectors are also&lt;br /&gt;
known as ''basis'' vectors. &amp;lt;!-- \index{basis vectors!real} --&amp;gt; &lt;br /&gt;
This is because any vector&lt;br /&gt;
in real three-dimensional space can be written in terms of these unit/basis vectors.  In&lt;br /&gt;
some sense they are the basic components of any vector.  Other basis&lt;br /&gt;
vectors could be used, however, such as in spherical and cylindrical coordinates.  When dealing with more abstract and/or complex vectors,&lt;br /&gt;
it is often helpful to ask what one would do for an ordinary&lt;br /&gt;
three-dimensional vector.  For example, properties of unit vectors,&lt;br /&gt;
dot products, etc. in three-dimensions are similar to the analogous&lt;br /&gt;
constructions in higher dimensions.  &lt;br /&gt;
&lt;br /&gt;
The ''inner product'',&amp;lt;!-- \index{inner product}--&amp;gt; or ''dot product'',&amp;lt;!--\index{dot product}--&amp;gt; for two real three-dimensional vectors,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v} = v_x\hat{x} + v_y \hat{y} + v_z\hat{z}, \;\; &lt;br /&gt;
\vec{w} = w_x\hat{x} + w_y \hat{y} + w_z\hat{z},&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
can be computed as follows:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}\cdot\vec{w} = v_xw_x + v_yw_y + v_zw_z.&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
For the inner product of &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt; with itself, we get the square of&lt;br /&gt;
the magnitude of &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt;, denoted &amp;lt;math&amp;gt;|\vec{v}|^2\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
|\vec{v}|^2 = \vec{v}\cdot\vec{v} = v_xv_x + v_yv_y +&lt;br /&gt;
v_zv_z=v_x^2+v_y^2+v_z^2. &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If we want a unit vector in the direction of &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt;, we can simply divide it&lt;br /&gt;
by its magnitude:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\hat{v} = \frac{\vec{v}}{|\vec{v}|}.  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now, of course, &amp;lt;math&amp;gt;\hat{v}\cdot\hat{v}= 1\,\!&amp;lt;/math&amp;gt;, which can easily be checked.  &lt;br /&gt;
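These formulas are easy to verify numerically; the following short Python/NumPy check (with arbitrarily chosen components) computes the dot product, the magnitude, and a unit vector exactly as above.&lt;br /&gt;

```python
import numpy as np

v = np.array([1.0, 2.0, 2.0])
w = np.array([3.0, 0.0, 4.0])

dot = v @ w                 # v_x w_x + v_y w_y + v_z w_z
mag = np.sqrt(v @ v)        # |v|, the magnitude of v
v_hat = v / mag             # unit vector in the direction of v

assert np.isclose(dot, 11.0)
assert np.isclose(mag, 3.0)
assert np.isclose(v_hat @ v_hat, 1.0)   # a unit vector dotted with itself
```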
&lt;br /&gt;
There are several ways to represent a vector.  The ones we will use&lt;br /&gt;
most often are column and row vector notations.  So, for example, we&lt;br /&gt;
could write the vector above as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v} = \left(\begin{array}{c} v_x \\ v_y \\ v_z&lt;br /&gt;
  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
In this case, our unit vectors are represented by the following: &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\hat{x} = \left(\begin{array}{c} 1 \\ 0 \\ 0&lt;br /&gt;
  \end{array}\right), \;\;  &lt;br /&gt;
\hat{y} = \left(\begin{array}{c} 0 \\ 1 \\ 0&lt;br /&gt;
  \end{array}\right), \;\;&lt;br /&gt;
\hat{z} = \left(\begin{array}{c} 0 \\ 0 \\ 1&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We next turn to the subject of complex vectors and the relevant&lt;br /&gt;
notation. &lt;br /&gt;
We will see how to compute the inner product later, since some other&lt;br /&gt;
definitions are required.&lt;br /&gt;
&lt;br /&gt;
====Complex Vectors====&lt;br /&gt;
&lt;br /&gt;
For complex vectors in quantum mechanics, Dirac notation is used most often.  This notation uses a &amp;lt;math&amp;gt;\left\vert \cdot \right\rangle\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
called a ''ket'', for a vector.  So our vector &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt; would be&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert v \right\rangle  = \left(\begin{array}{c} v_x \\ v_y \\ v_z&lt;br /&gt;
  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For qubits, i.e. two-state quantum systems, complex vectors will often be used:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align} \left\vert \psi \right\rangle &amp;amp;= \left(\begin{array}{c} \alpha \\ \beta &lt;br /&gt;
  \end{array}\right) \\&lt;br /&gt;
           &amp;amp;=\alpha \left\vert 0\right\rangle + \beta\left\vert 1\right\rangle,\end{align}&amp;lt;/math&amp;gt;|C.1}} &lt;br /&gt;
where&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert 0\right\rangle = \left(\begin{array}{c} 1 \\ 0 &lt;br /&gt;
  \end{array}\right), \;\;\mbox{and} \;\;&lt;br /&gt;
\left\vert 1\right\rangle = \left(\begin{array}{c} 0 \\ 1 &lt;br /&gt;
  \end{array}\right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
are the basis vectors.  The two numbers &amp;lt;math&amp;gt;\alpha\,\!&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; are complex numbers, so the vector is said to&lt;br /&gt;
be a complex vector.&lt;br /&gt;
&lt;br /&gt;
====Inner Product====&lt;br /&gt;
&lt;br /&gt;
Now let us suppose we have another complex vector,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \phi \right\rangle  = \left(\begin{array}{c} \gamma \\ \delta &lt;br /&gt;
  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The ''inner product'' between two vectors is often written as &amp;lt;math&amp;gt;\left\langle \phi \vert \psi \right\rangle \;\! &amp;lt;/math&amp;gt;, which is the same as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align} (\left\vert \phi \right\rangle )^\dagger\left\vert \psi \right\rangle &lt;br /&gt;
&amp;amp;= \left(\begin{array}{c} \gamma \\ \delta \end{array}\right)^\dagger&lt;br /&gt;
\left(\begin{array}{c} \alpha \\ \beta   \end{array}\right) \\&lt;br /&gt;
           &amp;amp;= \left(\begin{array}{cc} \gamma^* &amp;amp; \delta^* \end{array}\right) \left(\begin{array}{c} \alpha \\ \beta   \end{array}\right) \\  &lt;br /&gt;
           &amp;amp;= \gamma^*\alpha + \delta^*\beta \end{align} \;\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Outer Product====&lt;br /&gt;
&lt;br /&gt;
The ''outer product'' between these same two vectors is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt; &lt;br /&gt;
\begin{align} (\left\vert \phi \right\rangle )(\left\vert \psi \right\rangle)^\dagger &lt;br /&gt;
 &amp;amp;=  \left\vert \phi \right\rangle \left\langle \psi \right\vert \\&lt;br /&gt;
&amp;amp;= \left(\begin{array}{c} \gamma \\ \delta \end{array}\right)&lt;br /&gt;
\left(\begin{array}{c} \alpha \\ \beta   \end{array}\right)^\dagger \\&lt;br /&gt;
           &amp;amp;= \left(\begin{array}{c} \gamma \\ \delta \end{array}\right) \left(\begin{array}{cc} \alpha^* &amp;amp; \beta^*   \end{array}\right) \\  &lt;br /&gt;
           &amp;amp;=   \left(\begin{array}{cc} \gamma\alpha^* &amp;amp; \gamma\beta^* \\  \delta\alpha^* &amp;amp; \delta\beta^*  \end{array}\right) \end{align}\;\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
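Both products can be checked with NumPy (amplitudes chosen arbitrarily for illustration); note that the inner product conjugates the first vector, exactly as in the component formulas above.&lt;br /&gt;

```python
import numpy as np

alpha, beta = 0.6, 0.8j      # components of psi
gamma, delta = 1.0, 2.0j     # components of phi

psi = np.array([alpha, beta])
phi = np.array([gamma, delta])

# Inner product: np.vdot conjugates its first argument.
inner = np.vdot(phi, psi)
assert np.isclose(inner, np.conj(gamma) * alpha + np.conj(delta) * beta)

# Outer product: a 2x2 matrix with entries phi_i * conj(psi_j).
outer = np.outer(phi, psi.conj())
assert outer.shape == (2, 2)
assert np.isclose(outer[0, 0], gamma * np.conj(alpha))
```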
&lt;br /&gt;
===Linear Algebra: Matrices===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are many aspects of linear algebra that are quite useful in&lt;br /&gt;
quantum mechanics.  We will briefly discuss several of these aspects here.&lt;br /&gt;
First, some definitions and properties are provided that will&lt;br /&gt;
be useful.  Some familiarity with matrices&lt;br /&gt;
will be assumed, although many basic definitions are also included.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Let us denote some &amp;lt;math&amp;gt;m\times n\,\!&amp;lt;/math&amp;gt; matrix by &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.  The set of all &amp;lt;math&amp;gt;m\times&lt;br /&gt;
n\,\!&amp;lt;/math&amp;gt; matrices with real entries is &amp;lt;math&amp;gt;M(m\times n,\mathbb{R})\,\!&amp;lt;/math&amp;gt;.  Such matrices&lt;br /&gt;
are said to be real since they have all real entries.  Similarly, the&lt;br /&gt;
set of &amp;lt;math&amp;gt;m\times n\,\!&amp;lt;/math&amp;gt; complex matrices is &amp;lt;math&amp;gt;M(m\times n,\mathbb{C})\,\!&amp;lt;/math&amp;gt;.  For the&lt;br /&gt;
set of square &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; complex matrices, we simply write&lt;br /&gt;
&amp;lt;math&amp;gt;M(n,\mathbb{C})\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We will also refer to the set of matrix elements, &amp;lt;math&amp;gt;a_{ij}\,\!&amp;lt;/math&amp;gt;, where the&lt;br /&gt;
first index (&amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; in this case) labels the row and the second &amp;lt;math&amp;gt;(j)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
labels the column.  Thus the element &amp;lt;math&amp;gt;a_{23}\,\!&amp;lt;/math&amp;gt; is the element in the&lt;br /&gt;
second row and third column.  A comma is inserted if there is some&lt;br /&gt;
ambiguity.  For example, in a large matrix the element in the&lt;br /&gt;
2nd row and 12th&lt;br /&gt;
column is written as &amp;lt;math&amp;gt;a_{2,12}\,\!&amp;lt;/math&amp;gt; to distinguish it from the element in the&lt;br /&gt;
21st row and 2nd column.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Complex Conjugate====&lt;br /&gt;
&lt;br /&gt;
The ''complex conjugate of a matrix'' &amp;lt;!-- \index{complex conjugate!of a matrix}--&amp;gt;&lt;br /&gt;
is the matrix with each element replaced by its complex conjugate.  In&lt;br /&gt;
other words, to take the complex conjugate of a matrix, one takes the&lt;br /&gt;
complex conjugate of each entry in the matrix.  We denote the complex&lt;br /&gt;
conjugate with a star, like this: &amp;lt;math&amp;gt;A^*\,\!&amp;lt;/math&amp;gt;.  For example,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A^* &amp;amp;=&amp;amp; \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11} &amp;amp; a_{12} &amp;amp; a_{13} \\&lt;br /&gt;
        a_{21} &amp;amp; a_{22} &amp;amp; a_{23} \\&lt;br /&gt;
        a_{31} &amp;amp; a_{32} &amp;amp; a_{33} \end{array}\right)^*  \\&lt;br /&gt;
    &amp;amp;=&amp;amp; \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11}^* &amp;amp; a_{12}^* &amp;amp; a_{13}^* \\&lt;br /&gt;
        a_{21}^* &amp;amp; a_{22}^* &amp;amp; a_{23}^* \\&lt;br /&gt;
        a_{31}^* &amp;amp; a_{32}^* &amp;amp; a_{33}^* \end{array}\right). &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.2}}&lt;br /&gt;
(Notice that the notation for a matrix is a capital letter, whereas&lt;br /&gt;
the entries are represented by lower case&lt;br /&gt;
letters.)&lt;br /&gt;
&lt;br /&gt;
====Transpose====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''transpose'' &amp;lt;!-- \index{transpose} --&amp;gt; of a matrix is the same set of&lt;br /&gt;
elements, but now the first row becomes the first column, the second row&lt;br /&gt;
becomes the second column, and so on.  Thus the rows and columns are&lt;br /&gt;
interchanged.  For example, for a square &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrix, the&lt;br /&gt;
transpose is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A^T &amp;amp;=&amp;amp; \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11} &amp;amp; a_{12} &amp;amp; a_{13} \\&lt;br /&gt;
        a_{21} &amp;amp; a_{22} &amp;amp; a_{23} \\&lt;br /&gt;
        a_{31} &amp;amp; a_{32} &amp;amp; a_{33} \end{array}\right)^T \\&lt;br /&gt;
    &amp;amp;=&amp;amp; \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11} &amp;amp; a_{21} &amp;amp; a_{31} \\&lt;br /&gt;
        a_{12} &amp;amp; a_{22} &amp;amp; a_{32} \\&lt;br /&gt;
        a_{13} &amp;amp; a_{23} &amp;amp; a_{33} \end{array}\right). &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.3}}&lt;br /&gt;
&lt;br /&gt;
====Hermitian Conjugate====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The complex conjugate and transpose of a matrix is called the ''Hermitian conjugate'', or simply the ''dagger'', of a matrix.  It is so called because of the symbol used to denote it&lt;br /&gt;
(&amp;lt;math&amp;gt;\dagger\,\!&amp;lt;/math&amp;gt;):&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
(A^T)^* = (A^*)^T \equiv A^\dagger.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.4}}&lt;br /&gt;
For our &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; example, &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
A^\dagger = \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11}^* &amp;amp; a_{21}^* &amp;amp; a_{31}^* \\&lt;br /&gt;
        a_{12}^* &amp;amp; a_{22}^* &amp;amp; a_{32}^* \\&lt;br /&gt;
        a_{13}^* &amp;amp; a_{23}^* &amp;amp; a_{33}^* \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If a matrix is its own Hermitian conjugate, i.e. &amp;lt;math&amp;gt;A^\dagger = A\,\!&amp;lt;/math&amp;gt;, then&lt;br /&gt;
we call it a ''Hermitian matrix''.  &amp;lt;!-- \index{Hermitian matrix}--&amp;gt;&lt;br /&gt;
(Clearly this is only possible for square matrices.) Hermitian&lt;br /&gt;
matrices are very important in quantum mechanics since their&lt;br /&gt;
eigenvalues are real.  (See Sec.([[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|Eigenvalues and Eigenvectors]]).)&lt;br /&gt;
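A small NumPy check (with arbitrarily chosen matrices) of the dagger operation and the real-eigenvalue property of Hermitian matrices:&lt;br /&gt;

```python
import numpy as np

A = np.array([[1 + 2j, 3 + 0j],
              [4j,     5 - 1j]])

# The dagger: complex-conjugate and transpose, in either order.
A_dag = A.conj().T
assert np.allclose(A_dag, A.T.conj())

# A Hermitian matrix equals its own dagger and has real eigenvalues.
H = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(H, H.conj().T)
eigvals = np.linalg.eigvals(H)
assert np.allclose(eigvals.imag, 0.0)   # eigenvalues are real
```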
&lt;br /&gt;
====Index Notation====&lt;br /&gt;
&lt;br /&gt;
Very often we write the product of two matrices &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; simply as&lt;br /&gt;
&amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;C=AB\,\!&amp;lt;/math&amp;gt;.  However, it is also quite useful to write this&lt;br /&gt;
in component form.  In this case, if these are &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrices, the component form will be &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
c_{ik} = \sum_{j=1}^n a_{ij}b_{jk}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This says that the element in the &amp;lt;math&amp;gt;i^{\mbox{th}}\,\!&amp;lt;/math&amp;gt; row and&lt;br /&gt;
&amp;lt;math&amp;gt;k^{\mbox{th}}\,\!&amp;lt;/math&amp;gt; column of the matrix &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the sum &amp;lt;math&amp;gt;\sum_{j=1}^n&lt;br /&gt;
a_{ij}b_{jk}\,\!&amp;lt;/math&amp;gt;.  The transpose of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; has elements&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
c_{ki} = \sum_{j=1}^n a_{kj}b_{ji}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now if we were to transpose &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; as well, this would read&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
c_{ki} = \sum_{j=1}^n a^T_{jk}\, b^T_{ij} = \sum_{j=1}^n b^T_{ij}\, a^T_{jk}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This gives us a way of seeing the general rule that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
C^T = B^TA^T.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
It follows that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
C^\dagger = B^\dagger A^\dagger.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====The Trace====&lt;br /&gt;
&lt;br /&gt;
The ''trace'' &amp;lt;!-- \index{trace}--&amp;gt; of a matrix is the sum of the diagonal&lt;br /&gt;
elements and is denoted &amp;lt;math&amp;gt;\mbox{Tr}\,\!&amp;lt;/math&amp;gt;.  So for example, the trace of an&lt;br /&gt;
&amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mbox{Tr}(A) = \sum_{i=1}^n a_{ii}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some useful properties of the trace are the following:&lt;br /&gt;
#&amp;lt;math&amp;gt;\mbox{Tr}(AB) = \mbox{Tr}(BA)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;\mbox{Tr}(A + B) = \mbox{Tr}(A) + \mbox{Tr}(B)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Using the first of these results,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mbox{Tr}(UAU^{-1}) = \mbox{Tr}(U^{-1}UA) = \mbox{Tr}(A).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This relation is used so often that we state it here explicitly.&lt;br /&gt;
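Both trace properties, and the resulting invariance under similarity transformations, can be checked numerically.  A minimal sketch in NumPy (arbitrary random matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Tr(AB) = Tr(BA) and linearity of the trace
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))

# Hence the trace is invariant under similarity: Tr(U A U^{-1}) = Tr(A)
U = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # generically invertible
assert np.isclose(np.trace(U @ A @ np.linalg.inv(U)), np.trace(A))
```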
&lt;br /&gt;
====The Determinant====&lt;br /&gt;
&lt;br /&gt;
For a square matrix, the determinant is a particularly useful quantity.  For&lt;br /&gt;
example, an &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix is invertible if and only if its&lt;br /&gt;
determinant is not zero.  So let us define the determinant and give&lt;br /&gt;
some properties and examples.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''determinant''&amp;lt;!--\index{determinant}--&amp;gt; of a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrix, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
N = \left(\begin{array}{cc}&lt;br /&gt;
                 a &amp;amp; b \\&lt;br /&gt;
                 c &amp;amp; d \end{array}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.5}}&lt;br /&gt;
is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(N) = ad-bc.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.6}}&lt;br /&gt;
Higher-order determinants can be written in terms of smaller ones in&lt;br /&gt;
the standard way, e.g. by cofactor expansion along a row or column.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''determinant''&amp;lt;!-- \index{determinant}--&amp;gt; of a matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can&lt;br /&gt;
also be written in terms of its components as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(A) = \sum_{i,j,k,l,...} \epsilon_{ijkl...}&lt;br /&gt;
a_{1i}a_{2j}a_{3k}a_{4l} ...,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.7}}&lt;br /&gt;
where the symbol &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{ijkl...} = \begin{cases}&lt;br /&gt;
                       +1, \; \mbox{if } \; ijkl... \; \mbox{is an even permutation of } 1234...,\\&lt;br /&gt;
                       -1, \; \mbox{if } \; ijkl... \; \mbox{is an odd permutation of } 1234...,\\&lt;br /&gt;
                       \;\;\; 0, \; \mbox{otherwise} \; (\mbox{i.e., some index is repeated}).&lt;br /&gt;
                      \end{cases}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.8}}&lt;br /&gt;
&lt;br /&gt;
Let us consider the example of the &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; given&lt;br /&gt;
above.  The determinant can be calculated by&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(A) = \sum_{i,j,k} \epsilon_{ijk}&lt;br /&gt;
a_{1i}a_{2j}a_{3k},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where, explicitly, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{ijk} = \begin{cases}&lt;br /&gt;
                       +1, \;\mbox{if }\; ijk= 123,231,\; \mbox{or}\; 312, (\mbox{These are even permutations of }123),\\&lt;br /&gt;
                       -1, \;\mbox{if }\; ijk = 213,132,\;\mbox{or}\;321\;(\mbox{These are odd permutations of }123),\\&lt;br /&gt;
                    \;\;\;  0, \; \mbox{otherwise}, \; (\mbox{meaning any index is repeated}),&lt;br /&gt;
                 \end{cases}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.9}}&lt;br /&gt;
so that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\det(A) &amp;amp;= \epsilon_{123}a_{11}a_{22}a_{33} &lt;br /&gt;
         +\epsilon_{132}a_{11}a_{23}a_{32}&lt;br /&gt;
         +\epsilon_{231}a_{12}a_{23}a_{31}  \\&lt;br /&gt;
       &amp;amp;\quad+\epsilon_{213}a_{12}a_{21}a_{33}&lt;br /&gt;
         +\epsilon_{312}a_{13}a_{21}a_{32}&lt;br /&gt;
         +\epsilon_{321}a_{13}a_{22}a_{31}.&lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.10}}&lt;br /&gt;
Now given the values of &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; in [[#eqC.9|Eq. C.9]],&lt;br /&gt;
this is&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(A) = a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} + a_{12}a_{23}a_{31} &lt;br /&gt;
         - a_{12}a_{21}a_{33} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The determinant has several properties that are useful to know.  A few are listed here:  &lt;br /&gt;
#The determinant of the transpose of a matrix is the same as the determinant of the matrix itself: &amp;lt;center&amp;gt;&amp;lt;math&amp;gt; \det(A) = \det(A^T).\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
#The determinant of a product is the product of determinants:    &amp;lt;center&amp;gt;&amp;lt;math&amp;gt; \det(AB) = \det(A)\det(B).\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
From this last property, another specific property can be derived.&lt;br /&gt;
If we take the determinant of the product of a matrix and its&lt;br /&gt;
inverse, we find&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(U U^{-1}) = \det(U)\det(U^{-1}) = \det(\mathbb{I}) = 1,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
since the determinant of the identity is one.  This implies that&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(U^{-1}) = \frac{1}{\det(U)}.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====The Inverse of a Matrix====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The inverse &amp;lt;!-- \index{inverse}--&amp;gt; of a square matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is another matrix,&lt;br /&gt;
denoted &amp;lt;math&amp;gt;A^{-1}\,\!&amp;lt;/math&amp;gt;, such that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
AA^{-1} = A^{-1}A = \mathbb{I},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt; is the identity matrix consisting of zeroes everywhere&lt;br /&gt;
except the diagonal, which has ones.  For example, the &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}_3 = \left(\begin{array}{ccc} 1 &amp;amp; 0 &amp;amp; 0 \\ 0 &amp;amp; 1 &amp;amp; 0 \\ 0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is important to note that ''a matrix is invertible if and only if its determinant is nonzero.''  Thus one only needs to calculate the&lt;br /&gt;
determinant to see if a matrix has an inverse or not.&lt;br /&gt;
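The determinant test for invertibility is easy to demonstrate.  A brief NumPy sketch (arbitrary example matrices):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Nonzero determinant <=> the inverse exists
assert abs(np.linalg.det(A)) > 1e-12
A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(3))
assert np.allclose(A_inv @ A, np.eye(3))

# A singular matrix (proportional rows) has zero determinant and no inverse
S = np.array([[1.0, 2.0], [2.0, 4.0]])
assert np.isclose(np.linalg.det(S), 0.0)
```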
&lt;br /&gt;
====Hermitian Matrices====&lt;br /&gt;
&lt;br /&gt;
Hermitian matrices are important for a variety of reasons, primarily because their eigenvalues are real.  Thus Hermitian matrices are used to represent density operators and density matrices, as well as Hamiltonians.  The density operator is a positive semi-definite Hermitian matrix (it has no negative eigenvalues) whose trace is equal to one.  It is often desirable to represent &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrices using a real linear combination of a complete set of &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrices.  A set of &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrices is complete if any Hermitian matrix can be represented in terms of the set.  Let &amp;lt;math&amp;gt;\{\lambda_i\}\,\!&amp;lt;/math&amp;gt; be a complete set.  Then any Hermitian matrix can be represented by &amp;lt;math&amp;gt;\sum_i a_i \lambda_i\,\!&amp;lt;/math&amp;gt; with real coefficients &amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt;.  The set can always be taken to consist of the identity matrix together with traceless Hermitian matrices.  This is convenient for the density matrix (its trace is one) because the identity part of an &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrix is then &amp;lt;math&amp;gt;(1/N)\mathbb{I}\,\!&amp;lt;/math&amp;gt; when all other elements of the set are traceless.  For the Hamiltonian, the set consists of a traceless part and an identity part, where the identity part just gives an overall phase which can often be neglected.  &lt;br /&gt;
&lt;br /&gt;
One example of such a set which is extremely useful is the set of Pauli matrices.  These are discussed in detail in [[Chapter 2 - Qubits and Collections of Qubits|Chapter 2]] and in particular in [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|Section 2.4]].&lt;br /&gt;
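Such a decomposition can be made concrete for a qubit: the identity plus the three Pauli matrices form a complete set for &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrices.  A sketch in NumPy (the sample matrix is an arbitrary illustration):

```python
import numpy as np

# The identity plus the three Pauli matrices: a complete set for
# 2x2 Hermitian matrices.
I = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, -1.0]])         # an arbitrary Hermitian matrix

# Expansion coefficients a_i = Tr(H s_i)/2; they come out real.
coeffs = [np.trace(H @ s).real / 2 for s in (I, sx, sy, sz)]
H_rebuilt = sum(a * s for a, s in zip(coeffs, (I, sx, sy, sz)))
assert np.allclose(H, H_rebuilt)

# The identity part carries the whole trace: a_0 = Tr(H)/2 for N = 2.
assert np.isclose(coeffs[0], np.trace(H).real / 2)
```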
&lt;br /&gt;
====Unitary Matrices====&lt;br /&gt;
&lt;br /&gt;
A ''unitary matrix'' &amp;lt;!-- \index{unitary matrix} --&amp;gt; &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; is one whose&lt;br /&gt;
inverse is also its Hermitian conjugate, &amp;lt;math&amp;gt;U^\dagger = U^{-1}\,\!&amp;lt;/math&amp;gt;, so that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
U^\dagger U = UU^\dagger = \mathbb{I}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If the unitary matrix also has determinant one, it is said to be ''a special unitary matrix''.&amp;lt;!-- \index{special unitary matrix}--&amp;gt;  The set of&lt;br /&gt;
&amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; unitary matrices is denoted&lt;br /&gt;
&amp;lt;math&amp;gt;U(n)\,\!&amp;lt;/math&amp;gt; and the set of special unitary matrices is denoted &amp;lt;math&amp;gt;SU(n)\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
Unitary matrices are particularly important in quantum mechanics&lt;br /&gt;
because they describe the evolution of quantum states.&lt;br /&gt;
This ability follows from the fact that the rows and columns of a unitary matrix (viewed as vectors) are orthonormal.  (This is made clear in an example below.)  This means that when&lt;br /&gt;
they act on a basis vector of the form&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \left\vert j\right\rangle = &lt;br /&gt;
 \left(\begin{array}{c} 0 \\ 0 \\ \vdots \\ 1 \\ \vdots \\ 0 &lt;br /&gt;
  \end{array}\right), &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.11}}&lt;br /&gt;
with a single 1, in say the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th spot, and zeroes everywhere else, the result is a normalized complex vector.  Acting on a set of&lt;br /&gt;
orthonormal vectors of the form given in Eq.[[#eqC.11|(C.11)]]&lt;br /&gt;
will produce another orthonormal set.  &lt;br /&gt;
&lt;br /&gt;
Let us consider the example of a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; unitary matrix, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U = \left(\begin{array}{cc} &lt;br /&gt;
              a &amp;amp; b \\ &lt;br /&gt;
              c &amp;amp; d &lt;br /&gt;
           \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.12}}&lt;br /&gt;
The inverse of this matrix is the Hermitian conjugate, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U ^{-1} = U^\dagger = \left(\begin{array}{cc} &lt;br /&gt;
                         a^* &amp;amp; c^* \\ &lt;br /&gt;
                         b^* &amp;amp; d^* &lt;br /&gt;
                       \end{array}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.13}}&lt;br /&gt;
provided that the matrix &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; satisfies the constraints&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
|a|^2 + |b|^2 = 1, \; &amp;amp; \; ac^*+bd^* =0  \\&lt;br /&gt;
ca^*+db^*=0,  \;      &amp;amp;  \; |c|^2 + |d|^2 =1,&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|C.14}}&lt;br /&gt;
and&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
|a|^2 + |c|^2 = 1, \; &amp;amp; \; ba^*+dc^* =0  \\&lt;br /&gt;
b^*a+d^*c=0,  \;      &amp;amp;  \; |b|^2 + |d|^2 =1.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|C.15}}&lt;br /&gt;
Looking at each row as a vector, the constraints in&lt;br /&gt;
Eq.[[#eqC.14|(C.14)]] are the orthonormality conditions for the&lt;br /&gt;
vectors forming the rows.  Similarly, the constraints in&lt;br /&gt;
Eq.[[#eqC.15|(C.15)]] are the orthonormality conditions for the&lt;br /&gt;
vectors forming the columns.&lt;br /&gt;
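These orthonormality conditions are easy to verify for a concrete unitary.  A sketch in NumPy using a simple rotation matrix (an arbitrary example):

```python
import numpy as np

# A concrete 2x2 unitary (a real rotation)
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

# U^dagger is the inverse
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.allclose(U @ U.conj().T, np.eye(2))

# Rows and columns, viewed as vectors, are orthonormal (cf. Eqs. C.14, C.15)
assert np.isclose(np.vdot(U[0], U[0]), 1) and np.isclose(np.vdot(U[0], U[1]), 0)
assert np.isclose(np.vdot(U[:, 0], U[:, 0]), 1) and np.isclose(np.vdot(U[:, 0], U[:, 1]), 0)

# Acting on a basis vector of the form (C.11) yields a normalized vector
e0 = np.array([1, 0], dtype=complex)
assert np.isclose(np.linalg.norm(U @ e0), 1)
```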
&lt;br /&gt;
===More Dirac Notation===&lt;br /&gt;
&lt;br /&gt;
Now that we have a definition for the Hermitian conjugate, we consider the&lt;br /&gt;
case of an &amp;lt;math&amp;gt;n\times 1\,\!&amp;lt;/math&amp;gt; matrix, i.e. a column vector.  In Dirac notation, this is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi\right\rangle = \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The Hermitian conjugate comes up so often that we use the following&lt;br /&gt;
notation for vectors:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle \psi\right\vert = (\left\vert\psi\right\rangle)^\dagger = \left(\begin{array}{c} \alpha \\&lt;br /&gt;
    \beta \end{array}\right)^\dagger &lt;br /&gt;
 = \left( \alpha^*, \; \beta^* \right).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a row vector and in Dirac notation is denoted by the symbol &amp;lt;math&amp;gt;\left\langle\cdot \right\vert\!&amp;lt;/math&amp;gt;, which is called a ''bra''.  Let us consider a second complex vector, &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\phi\right\rangle = \left(\begin{array}{c} \gamma \\ \delta&lt;br /&gt;
  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The ''inner product'' &amp;lt;!-- \index{inner product}--&amp;gt; between &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; and &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert\phi\right\rangle\,\!&amp;lt;/math&amp;gt; is computed as follows:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \left\langle\phi\mid\psi\right\rangle &amp;amp; \equiv (\left\vert\phi\right\rangle)^\dagger\left\vert\psi \right\rangle   \\&lt;br /&gt;
                  &amp;amp;= (\gamma^*,\delta^*) \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right)   \\&lt;br /&gt;
                  &amp;amp;= \gamma^*\alpha + \delta^*\beta.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.16}}&lt;br /&gt;
If these two vectors are ''orthogonal'', &amp;lt;!-- \index{orthogonal!vectors} --&amp;gt;&lt;br /&gt;
then their inner product is zero, or &amp;lt;math&amp;gt;\left\langle\phi\mid\psi\right\rangle =0\,\!&amp;lt;/math&amp;gt;.  (The &amp;lt;math&amp;gt; \left\langle\phi\mid\psi\right\rangle \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
is called a ''bracket'', which is the product of the ''bra'' and the ''ket''.)  The inner product of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; with itself is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\psi\mid\psi\right\rangle = |\alpha|^2 + |\beta|^2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This vector is considered normalized when &amp;lt;math&amp;gt;\left\langle\psi\mid\psi\right\rangle = 1\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
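The bra-ket computation of Eq. (C.16) translates directly into code.  A sketch in NumPy (the particular kets are arbitrary examples); note that `np.vdot` conjugates its first argument, exactly as forming a bra does:

```python
import numpy as np

# Kets as column vectors; the bra is the Hermitian conjugate (row vector)
psi = np.array([3 + 4j, 0 + 0j]) / 5     # normalized: (9 + 16)/25 = 1
phi = np.array([0 + 0j, 1 + 0j])

# <phi|psi> = (|phi>)^dagger |psi>; np.vdot conjugates its first argument
inner = np.vdot(phi, psi)
assert np.isclose(inner, 0)              # these two kets are orthogonal

# <psi|psi> = |alpha|^2 + |beta|^2 = 1 for a normalized ket
assert np.isclose(np.vdot(psi, psi).real, 1.0)
```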
&lt;br /&gt;
More generally, we will consider vectors in &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; dimensions.  In this&lt;br /&gt;
case we write the vector in terms of a set of basis vectors,&lt;br /&gt;
&amp;lt;math&amp;gt;\{\left\vert i\right\rangle\}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;i = 0,1,2,...N-1\,\!&amp;lt;/math&amp;gt;.  This is an ordered set of&lt;br /&gt;
vectors which are labeled simply by integers.  If the set is orthogonal,&lt;br /&gt;
then &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i\mid j\right\rangle = 0, \;\; \mbox{for all }i\neq j.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If they are normalized, then &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i \mid i \right\rangle = 1, \;\;\mbox{for all } i.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If both of these are true, i.e. the entire set is orthonormal, we can&lt;br /&gt;
write&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i\mid j\right\rangle = \delta_{ij}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where the symbol &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; is called the Kronecker delta &amp;lt;!-- \index{Kronecker delta} --&amp;gt; and is defined by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\delta_{ij} = \begin{cases}&lt;br /&gt;
               1, &amp;amp; \mbox{if } i=j, \\&lt;br /&gt;
               0, &amp;amp; \mbox{if } i\neq j.&lt;br /&gt;
              \end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.17}}&lt;br /&gt;
Now consider two such vectors expressed in the same orthonormal basis&lt;br /&gt;
(here with indices running from &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; to &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt;, so the vectors are &amp;lt;math&amp;gt;(N+1)\,\!&amp;lt;/math&amp;gt;-dimensional):&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \Psi\right\rangle = \sum_{i=0}^{N} \alpha_i\left\vert i\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
and&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\Phi\right\rangle = \sum_{j=0}^{N} \beta_j\left\vert j\right\rangle.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Then the inner product &amp;lt;!--\index{inner product}--&amp;gt; is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\langle\Psi\mid\Phi\right\rangle &amp;amp;= \left(\sum_{i=0}^{N}&lt;br /&gt;
             \alpha_i\left\vert i\right\rangle\right)^\dagger\left(\sum_{j=0}^{N} \beta_j\left\vert j\right\rangle\right)  \\&lt;br /&gt;
                 &amp;amp;= \sum_{ij} \alpha_i^*\beta_j\left\langle i\mid j\right\rangle  \\&lt;br /&gt;
                 &amp;amp;= \sum_{ij} \alpha_i^*\beta_j\delta_{ij}  \\&lt;br /&gt;
                 &amp;amp;= \sum_i\alpha^*_i\beta_i,&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.18}}&lt;br /&gt;
where the fact that the Kronecker delta is zero unless&lt;br /&gt;
&amp;lt;math&amp;gt;i=j\,\!&amp;lt;/math&amp;gt; is used to obtain the last equality.  Taking the inner product of a vector&lt;br /&gt;
with itself gives&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\Psi\mid\Psi\right\rangle = \sum_i\alpha^*_i\alpha_i = \sum_i|\alpha_i|^2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This immediately gives us a very important property of the inner&lt;br /&gt;
product.  It tells us that, in general,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\Phi\mid\Phi\right\rangle \geq 0, \;\; \mbox{and} \;\; \left\langle\Phi\mid \Phi\right\rangle = 0&lt;br /&gt;
\Leftrightarrow \left\vert\Phi\right\rangle = 0. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
(The symbol &amp;lt;math&amp;gt;\Leftrightarrow\,\!&amp;lt;/math&amp;gt; means &amp;lt;nowiki&amp;gt;&amp;quot;if and only if,&amp;quot;&amp;lt;/nowiki&amp;gt; sometimes written as &amp;lt;nowiki&amp;gt;&amp;quot;iff.&amp;quot;&amp;lt;/nowiki&amp;gt;)  &lt;br /&gt;
&lt;br /&gt;
We could also expand a vector in a different basis.  Let us suppose&lt;br /&gt;
that the set &amp;lt;math&amp;gt;\{\left\vert e_k \right\rangle\}\,\!&amp;lt;/math&amp;gt; is an orthonormal basis &amp;lt;math&amp;gt;(\left\langle e_k \mid e_l\right\rangle =&lt;br /&gt;
\delta_{kl})\,\!&amp;lt;/math&amp;gt; that is different from the one considered earlier.  We&lt;br /&gt;
could expand our vector &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt; in terms of our new basis by&lt;br /&gt;
expanding our new basis in terms of our old basis.  Let us first&lt;br /&gt;
expand the &amp;lt;math&amp;gt;\left\vert e_k\right\rangle\,\!&amp;lt;/math&amp;gt; in terms of the &amp;lt;math&amp;gt;\left\vert j\right\rangle\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert e_k\right\rangle= \sum_j \left\vert j\right\rangle \left\langle j\mid e_k\right\rangle,&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.19}}&lt;br /&gt;
so that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert \Psi\right\rangle &amp;amp;= \sum_j \alpha_j\left\vert j\right\rangle  \\&lt;br /&gt;
           &amp;amp;= \sum_{j}\sum_k\alpha_j\left\vert e_k \right\rangle \left\langle e_k \mid j\right\rangle  \\ &lt;br /&gt;
           &amp;amp;= \sum_k \alpha_k^\prime \left\vert e_k\right\rangle, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.20}}&lt;br /&gt;
where &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\alpha_k^\prime = \sum_j \alpha_j \left\langle e_k \mid j\right\rangle. &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.21}}&lt;br /&gt;
Notice that the insertion of &amp;lt;math&amp;gt;\sum_k\left\vert e_k\right\rangle\left\langle e_k\right\vert\,\!&amp;lt;/math&amp;gt; didn't do anything to our original vector; it is the same vector, just in a&lt;br /&gt;
different basis.  Therefore, this is effectively the identity operator,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I} = \sum_k\left\vert e_k \right\rangle\left\langle e_k\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This is an important and quite useful relation.  &lt;br /&gt;
To interpret Eq.[[#eqC.19|(C.19)]], we can draw a close&lt;br /&gt;
analogy with three-dimensional real vectors.  The inner product&lt;br /&gt;
&amp;lt;math&amp;gt;\left\langle j \mid e_k \right\rangle\,\!&amp;lt;/math&amp;gt; can be interpreted as the projection of one vector onto&lt;br /&gt;
another.  It provides the part of &amp;lt;math&amp;gt;\left\vert e_k \right\rangle\,\!&amp;lt;/math&amp;gt; along &amp;lt;math&amp;gt;\left\vert j \right\rangle\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Transformations===&lt;br /&gt;
&lt;br /&gt;
Suppose we have two different orthonormal bases, &amp;lt;math&amp;gt;\{\left\vert e_k\right\rangle\}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\{\left\vert j\right\rangle\}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
The numbers &amp;lt;math&amp;gt;\left\langle e_k\mid j\right\rangle\,\!&amp;lt;/math&amp;gt; for all the different &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; are&lt;br /&gt;
often referred to as matrix elements since the set forms a matrix, with&lt;br /&gt;
&amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; labelling the rows and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; labelling the columns.  Thus we&lt;br /&gt;
can represent the transformation from one basis to another with a matrix&lt;br /&gt;
transformation.  Let &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; be the matrix with elements &amp;lt;math&amp;gt;m_{kj} =&lt;br /&gt;
\left\langle e_k\mid j\right\rangle\,\!&amp;lt;/math&amp;gt;.  The transformation from one basis to another,&lt;br /&gt;
written in terms of the coefficients of &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; A^\prime = MA, &amp;lt;/math&amp;gt;|C.22}}&lt;br /&gt;
where &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
A^\prime = \left(\begin{array}{c} \alpha_1^\prime \\ \alpha_2^\prime \\ \vdots \\&lt;br /&gt;
    \alpha_n^\prime \end{array}\right), \;\; &lt;br /&gt;
\mbox{ and } \;\;&lt;br /&gt;
A = \left(\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\&lt;br /&gt;
    \alpha_n\end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This sort of transformation is a change of basis.  Often, when one vector is transformed into another, the transformation can be viewed as acting on the components of the vector and is likewise represented by a matrix.  Thus transformations can either be&lt;br /&gt;
represented by the matrix equation, like Eq.[[#eqC.22|(C.22)]], or the components, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\alpha_k^\prime = \sum_j \alpha_j \left\langle e_k \mid j \right\rangle = \sum_j m_{kj}\alpha_j. &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.23}}&lt;br /&gt;
In the case that we consider a matrix transformation of basis elements, we call it a passive transformation.  (The transformation does nothing to the object, but only changes the basis in which the object is described.)  An active transformation is one where the object itself is transformed.  Often these two transformations, active and passive, are very simply related.  However, the distinction can be very important.  &lt;br /&gt;
&lt;br /&gt;
For a general transformation matrix &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; acting on a vector,&lt;br /&gt;
the matrix elements in a particular basis &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
t_{ij} = \left\langle i\right\vert T \left\vert j\right\rangle, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
just as elements of a vector can be found using&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i\mid \Psi \right\rangle = \alpha_i.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Transformations of a Qubit====&lt;br /&gt;
&lt;br /&gt;
It is worth dwelling on this point and presenting several ways in which to parametrize the set of transformations of a qubit.  A qubit state is represented by a complex two-dimensional vector normalized to one:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \psi\right\rangle = \alpha_0 \left\vert 0 \right\rangle + \alpha_1 \left\vert 1\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1 \end{array}\right), \;\;\;\; |\alpha_0|^2 + |\alpha_1|^2 = 1.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The most general matrix transformation that will take this to any other state of the same form (complex, 2-d vector with unit norm) is a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; unitary matrix.  In [[Chapter 2 - Qubits and Collections of Qubits|Chapter 2]], several specific examples of qubit transformations were given; in [[Chapter 3 - Physics of Quantum Information|Chapter 3]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|Section 3.4]] it was stated that an element of SU(2) can be written as (see [[Chapter 3 - Physics of Quantum Information#Exponentian of a Matrix|Section 3.2.1, Exponentiation of a Matrix]])&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
U(\theta) &amp;amp;= \exp(-i\vec{n}\cdot\vec{\sigma} \theta/2) \\&lt;br /&gt;
          &amp;amp;= (\mathbb{I}\cos(\theta/2) -i\vec{n}\cdot\vec{\sigma} \sin(\theta/2))&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.24}}&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{n}\,\!&amp;lt;/math&amp;gt; is a unit vector, &amp;lt;math&amp;gt;|\vec{n}|=1\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\vec{n}\cdot\vec{\sigma} =&lt;br /&gt;
n_1\sigma_1+n_2\sigma_2+n_3\sigma_3\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
Explicitly, this is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \exp(-i\vec{n}\cdot\vec{\sigma} \theta/2) &amp;amp;= \left(\begin{array}{cc}&lt;br /&gt;
                                  1 &amp;amp; 0 \\ &lt;br /&gt;
                                  0 &amp;amp; 1 \end{array}\right)\cos(\theta/2) \\&lt;br /&gt;
                        &amp;amp; \;\;\;   + (-i)\left[ n_1\left(\begin{array}{cc}&lt;br /&gt;
                                  0 &amp;amp; 1 \\ &lt;br /&gt;
                                  1 &amp;amp; 0 \end{array}\right)&lt;br /&gt;
                              + n_2\left(\begin{array}{cc}&lt;br /&gt;
                                  0 &amp;amp; -i \\ &lt;br /&gt;
                                  i &amp;amp; 0 \end{array}\right)&lt;br /&gt;
                              + n_3\left(\begin{array}{cc}&lt;br /&gt;
                                  1 &amp;amp; 0 \\ &lt;br /&gt;
                                  0 &amp;amp; -1 \end{array}\right)\right]\sin(\theta/2) \\&lt;br /&gt;
                                &amp;amp;= &lt;br /&gt;
         \left(\begin{array}{cc}&lt;br /&gt;
  \cos(\theta/2) -in_3\sin(\theta/2) &amp;amp; (-in_1-n_2)\sin(\theta/2) \\ &lt;br /&gt;
   (-in_1+n_2)\sin(\theta/2) &amp;amp; \cos(\theta/2) +in_3\sin(\theta/2)  \end{array}\right).&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Notice that this is a ''special unitary matrix.''  (See Section [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Unitary Matrices]].)&lt;br /&gt;
To see that this is the most general SU(2) matrix, one needs to verify that any &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; unitary matrix with unit determinant can be written in this form.  (One way to do this is to start with a generic matrix and impose these restrictions; alternatively, one may convince oneself that the form is general by acting on basis vectors.)  This is the most general qubit transformation, up to an overall phase, and can be interpreted as a rotation about the axis &amp;lt;math&amp;gt;\hat{n}\,\!&amp;lt;/math&amp;gt; by an angle &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
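Eq. (C.24) rests on the identity &amp;lt;math&amp;gt;(\vec{n}\cdot\vec{\sigma})^2 = \mathbb{I}\,\!&amp;lt;/math&amp;gt;, which collapses the exponential series into cosine and sine terms.  The following sketch in NumPy checks this by summing the series directly (the unit vector and angle are arbitrary examples):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

n = np.array([1.0, 2.0, 2.0]) / 3.0      # unit vector, |n| = 1
theta = 0.7
n_sigma = n[0] * sx + n[1] * sy + n[2] * sz

# (n.sigma)^2 = I is what reduces the exponential to Eq. (C.24)
assert np.allclose(n_sigma @ n_sigma, np.eye(2))

# Matrix exponential of -i n.sigma theta/2 by Taylor series
U_series = np.zeros((2, 2), dtype=complex)
term = np.eye(2, dtype=complex)
for k in range(30):
    U_series += term
    term = term @ (-1j * n_sigma * theta / 2) / (k + 1)

# Closed form: I cos(theta/2) - i n.sigma sin(theta/2)
U_closed = np.eye(2) * np.cos(theta / 2) - 1j * n_sigma * np.sin(theta / 2)
assert np.allclose(U_series, U_closed)

# U is special unitary: U^dagger U = I and det U = 1
assert np.allclose(U_closed.conj().T @ U_closed, np.eye(2))
assert np.isclose(np.linalg.det(U_closed), 1.0)
```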
&lt;br /&gt;
Another parametrization of this set of matrices is the following, called the Euler angle parametrization:&lt;br /&gt;
 {{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U_{EA}   = \exp(-i\sigma_z \alpha/2) \exp(-i\sigma_y \beta/2) \exp(-i\sigma_z \gamma/2).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.25}}&lt;br /&gt;
In this case the choice of the matrices &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; is not unique; any two distinct Pauli matrices may be chosen.  This parametrization is quite simple, useful, and generalizable to SU(N) for arbitrary N.  In the simple case of a qubit, one may check its generality by acting on basis vectors as before.  Alternatively, with a little thought, one may see that rotating to a position on the sphere by the first angle, followed by rotations through the other two, provides a general orientation of an object.&lt;br /&gt;
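The Euler-angle product of Eq. (C.25) composes three elementary rotations, and the result is again a special unitary matrix.  A sketch in NumPy (the helper `rot` and the particular angles are illustrative):

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(sigma, angle):
    """exp(-i sigma angle/2) for a Pauli matrix sigma (uses sigma^2 = I)."""
    return np.eye(2) * np.cos(angle / 2) - 1j * sigma * np.sin(angle / 2)

alpha, beta, gamma = 0.4, 1.1, -0.6      # arbitrary Euler angles
U_ea = rot(sz, alpha) @ rot(sy, beta) @ rot(sz, gamma)   # Eq. (C.25)

# The composition is again a special unitary matrix
assert np.allclose(U_ea.conj().T @ U_ea, np.eye(2))
assert np.isclose(np.linalg.det(U_ea), 1.0)
```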
&lt;br /&gt;
====Similarity Transformation====&lt;br /&gt;
&lt;br /&gt;
A ''similarity transformation'' &amp;lt;!--\index{similarity transformation}--&amp;gt; &lt;br /&gt;
of an &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; by an invertible matrix &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;S A S^{-1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
There are (at least) three important things to note about similarity&lt;br /&gt;
transformations: &lt;br /&gt;
#Similarity transformations leave the trace of a matrix unchanged.  This is shown explicitly in [[#The Trace|Section 3.5]].&lt;br /&gt;
#Similarity transformations leave the determinant of a matrix unchanged, or invariant.  This is because &amp;lt;center&amp;gt;&amp;lt;math&amp;gt; \det(SAS^{-1}) = \det(S)\det(A)\det(S^{-1}) =\det(S)\det(A)\frac{1}{\det(S)} = \det(A). \,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
#Simultaneous similarity transformations of matrices in an equation will leave the equation unchanged.  Let &amp;lt;math&amp;gt;A^\prime = SAS^{-1}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B^\prime = SBS^{-1}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;C^\prime = SCS^{-1}\,\!&amp;lt;/math&amp;gt;.  If &amp;lt;math&amp;gt;AB=C\,\!&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;A^\prime B^\prime = C^\prime\,\!&amp;lt;/math&amp;gt;, since &amp;lt;math&amp;gt;A^\prime B^\prime = SAS^{-1}SBS^{-1} = SABS^{-1} =  SCS^{-1}=C^\prime\,\!&amp;lt;/math&amp;gt;.  The two matrices &amp;lt;math&amp;gt;A^\prime\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; are said to be ''similar''.&lt;br /&gt;
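These invariances are easy to check numerically; the following sketch (not part of the original text) uses generic random matrices, which are invertible with probability one:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3))      # generic, hence almost surely invertible
Ap = S @ A @ np.linalg.inv(S)        # similarity transformation of A

print(np.isclose(np.trace(Ap), np.trace(A)))            # trace invariant
print(np.isclose(np.linalg.det(Ap), np.linalg.det(A)))  # determinant invariant
```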
&amp;lt;!-- \index{similar matrices} --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Eigenvalues and Eigenvectors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- \index{eigenvalues}\index{eigenvectors} --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A matrix can always be brought to diagonal form in the following sense: for&lt;br /&gt;
every complex matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; there is a diagonal matrix &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; with non-negative entries such that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
M = UDV,  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.26}}&lt;br /&gt;
where &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; are unitary matrices.  This form is called a singular value decomposition of the matrix and the entries of the diagonal matrix &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; are called the ''singular values'' &amp;lt;!--\index{singular values}--&amp;gt; &lt;br /&gt;
of the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt;.  However, the singular values are not always easy to find.  &lt;br /&gt;
&lt;br /&gt;
For the special case that the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is Hermitian &amp;lt;math&amp;gt;(M^\dagger = M)\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; can be written as&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
M = U D U^\dagger&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.27}}&lt;br /&gt;
where &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; is unitary &amp;lt;math&amp;gt;(U^{-1}=U^\dagger)\,\!&amp;lt;/math&amp;gt;.  In this case the elements&lt;br /&gt;
of the matrix &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; are called ''eigenvalues''. &amp;lt;!--\index{eigenvalues}--&amp;gt;&lt;br /&gt;
Very often eigenvalues are introduced as solutions to the equation&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
M \left\vert v\right\rangle = \lambda \left\vert v\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\left\vert v\right\rangle\,\!&amp;lt;/math&amp;gt; is an ''eigenvector''. &amp;lt;!--\index{eigenvector} --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To find the eigenvalues and eigenvectors of a matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt;, we follow a&lt;br /&gt;
standard procedure which is to calculate&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(\lambda\mathbb{I} - M) = 0&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.28}}&lt;br /&gt;
and then solve for &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  The different solutions for &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt; form the&lt;br /&gt;
set of eigenvalues, called the ''spectrum''. &amp;lt;!-- \index{spectrum}--&amp;gt; Let the different eigenvalues be denoted by &amp;lt;math&amp;gt;\lambda_i\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;i=1,2,...,n\,\!&amp;lt;/math&amp;gt; for an &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix.  If two&lt;br /&gt;
eigenvalues are equal, we say the spectrum is &lt;br /&gt;
''degenerate''. &amp;lt;!--\index{degenerate}--&amp;gt; To find the&lt;br /&gt;
eigenvectors, which correspond to different eigenvalues, the equation &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
M \left\vert v\right\rangle = \lambda_i \left\vert v\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
must be solved for each value of &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;.  Notice that this equation&lt;br /&gt;
holds even if we multiply both sides by some complex number.  This&lt;br /&gt;
implies that an eigenvector can always be scaled.  Usually they are&lt;br /&gt;
normalized to obtain an orthonormal set.  As we will see by example,&lt;br /&gt;
degenerate eigenvalues require some care.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
&lt;br /&gt;
Consider a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\sigma = \left(\begin{array}{cc} &lt;br /&gt;
               1+a &amp;amp; b-ic \\&lt;br /&gt;
              b+ic &amp;amp; 1-a  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.29}}&lt;br /&gt;
To find the eigenvalues &amp;lt;!--\index{eigenvalues}--&amp;gt; &lt;br /&gt;
of this, we follow a standard procedure, which&lt;br /&gt;
is to calculate &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(\sigma-\lambda\mathbb{I}) = 0,&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.30}}&lt;br /&gt;
and solve for &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  The eigenvalues of this matrix are given by&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det\left(\begin{array}{cc} &lt;br /&gt;
               1+a-\lambda &amp;amp; b-ic \\&lt;br /&gt;
              b+ic &amp;amp; 1-a-\lambda  \end{array}\right) =0,  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
which implies that the eigenvalues are&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda_{\pm} = 1\pm \sqrt{a^2+b^2+c^2}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
and the corresponding eigenvectors &amp;lt;!--\index{eigenvectors}--&amp;gt; are, up to normalization (for &amp;lt;math&amp;gt;b - ic \neq 0\,\!&amp;lt;/math&amp;gt;),&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_+=\left(\begin{array}{c}&lt;br /&gt;
        b - ic \\ &lt;br /&gt;
        \sqrt{a^2+b^2+c^2} - a &lt;br /&gt;
        \end{array}\right), &lt;br /&gt;
v_- = \left(\begin{array}{c}&lt;br /&gt;
         b - ic \\ &lt;br /&gt;
         -\sqrt{a^2+b^2+c^2} - a \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
These expressions are useful for calculating properties of qubit&lt;br /&gt;
states as will be seen in the text.&lt;br /&gt;
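The eigenvalue formula above is easy to confirm numerically for sample parameters; this check is our own and not part of the original text:

```python
import numpy as np

a, b, c = 0.3, -0.4, 0.5
sigma = np.array([[1 + a, b - 1j * c],
                  [b + 1j * c, 1 - a]])
r = np.sqrt(a**2 + b**2 + c**2)

# eigvalsh returns the eigenvalues of a Hermitian matrix in ascending order
evals = np.linalg.eigvalsh(sigma)
print(np.allclose(evals, [1 - r, 1 + r]))
```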
&lt;br /&gt;
====Example 2====&lt;br /&gt;
&lt;br /&gt;
Now consider a &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrix,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
N= \left(\begin{array}{ccc}&lt;br /&gt;
              1 &amp;amp; -i &amp;amp; 0 \\&lt;br /&gt;
              i &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 0 &amp;amp; 1 \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
First we calculate&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det\left(\begin{array}{ccc}&lt;br /&gt;
              1-\lambda &amp;amp; -i &amp;amp; 0 \\&lt;br /&gt;
              i         &amp;amp; 1-\lambda  &amp;amp; 0 \\&lt;br /&gt;
              0         &amp;amp;       0    &amp;amp; 1-\lambda &lt;br /&gt;
           \end{array}\right) &lt;br /&gt;
    = (1-\lambda)[(1-\lambda)^2-1].&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This implies that the eigenvalues (see Section [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|C.6]]) are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda = 1,0, \mbox{ or } 2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Let &amp;lt;math&amp;gt;\lambda_1=1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\lambda_0 = 0\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\lambda_2 = 2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
To find eigenvectors, &amp;lt;!--\index{eigenvalues}--&amp;gt; we calculate&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Nv &amp;amp;= \lambda v, \\&lt;br /&gt;
\left(\begin{array}{ccc}&lt;br /&gt;
              1 &amp;amp; -i &amp;amp; 0 \\&lt;br /&gt;
              i &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 0 &amp;amp; 1 \end{array}\right)\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right) &amp;amp;= \lambda\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right)&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.31}}&lt;br /&gt;
for each &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
For &amp;lt;math&amp;gt;\lambda = 1\,\!&amp;lt;/math&amp;gt;, we get the following equations:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_1 -iv_2 &amp;amp;= v_1, \\&lt;br /&gt;
iv_1+v_2 &amp;amp;= v_2,  \\&lt;br /&gt;
v_3 &amp;amp;= v_3. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.32}}&lt;br /&gt;
Solving these equations gives &amp;lt;math&amp;gt;v_1 =0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;v_2 =0\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;v_3\,\!&amp;lt;/math&amp;gt; is any non-zero number (which will be chosen to normalize the vector).  For &amp;lt;math&amp;gt;\lambda&lt;br /&gt;
=0\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_1  &amp;amp;= iv_2, \\&lt;br /&gt;
v_3 &amp;amp;= 0. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.33}}&lt;br /&gt;
And finally, for &amp;lt;math&amp;gt;\lambda = 2\,\!&amp;lt;/math&amp;gt;, we obtain&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_1 -iv_2 &amp;amp;= 2v_1, \\&lt;br /&gt;
iv_1+v_2 &amp;amp;= 2v_2, \\&lt;br /&gt;
v_3 &amp;amp;= 2v_3, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.34}}&lt;br /&gt;
so that &amp;lt;math&amp;gt;v_1 = -iv_2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
Therefore, our three eigenvectors &amp;lt;!--\index{eigenvalues}--&amp;gt; are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_0 = \frac{1}{\sqrt{2}}\left(\begin{array}{c} i \\ 1\\ 0 \end{array}\right), \; &lt;br /&gt;
v_1 = \left(\begin{array}{c} 0 \\ 0\\ 1 \end{array}\right), \; &lt;br /&gt;
v_2 = \frac{1}{\sqrt{2}}\left(\begin{array}{c} -i \\ 1\\ 0 \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The matrix &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
V= (v_0,v_1,v_2) = \left(\begin{array}{ccc}&lt;br /&gt;
              i/\sqrt{2} &amp;amp; 0     &amp;amp; -i/\sqrt{2} \\&lt;br /&gt;
              1/\sqrt{2} &amp;amp; 0     &amp;amp; 1/\sqrt{2} \\&lt;br /&gt;
              0          &amp;amp; 1     &amp;amp; 0 \end{array}\right)&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
is the matrix that diagonalizes &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; in the following way:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
N = VDV^\dagger&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
D = \left(\begin{array}{ccc}&lt;br /&gt;
              0 &amp;amp; 0  &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 1  &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 0  &amp;amp; 2 \end{array}\right)&lt;br /&gt;
.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
We may write this as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
V^\dagger N V = D.   &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This is sometimes called the ''eigenvalue decomposition''&amp;lt;!--\index{eigenvalue decomposition}--&amp;gt;  of the matrix and can also be written as&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
N = \sum_i \lambda_i v_iv^\dagger_i.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.35}}&lt;br /&gt;
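The eigenvalue decomposition of this example can be verified with NumPy; the check below (our own, not part of the original text) reconstructs N from the eigenvectors found above:

```python
import numpy as np

N = np.array([[1, -1j, 0],
              [1j,  1, 0],
              [0,   0, 1]])
v0 = np.array([1j, 1, 0]) / np.sqrt(2)    # eigenvalue 0
v1 = np.array([0, 0, 1], dtype=complex)   # eigenvalue 1
v2 = np.array([-1j, 1, 0]) / np.sqrt(2)   # eigenvalue 2

V = np.column_stack([v0, v1, v2])
D = np.diag([0.0, 1.0, 2.0])
# N = V D V^dagger, i.e. the sum of lambda_i v_i v_i^dagger
print(np.allclose(N, V @ D @ V.conj().T))
```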
&lt;br /&gt;
====Example 3====&lt;br /&gt;
&lt;br /&gt;
Next, consider the complex &amp;lt;math&amp;gt;3\times 3&amp;lt;/math&amp;gt; Hermitian matrix &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
M = \left(\begin{array}{ccc}&lt;br /&gt;
              \frac{5}{2} &amp;amp; 0  &amp;amp; \frac{i}{2} \\&lt;br /&gt;
              0 &amp;amp; 2  &amp;amp; 0 \\&lt;br /&gt;
              -\frac{i}{2} &amp;amp; 0  &amp;amp; \frac{5}{2} \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
First we calculate&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det\left(\begin{array}{ccc}&lt;br /&gt;
              \frac{5}{2}-\lambda &amp;amp; 0 &amp;amp; \frac{i}{2} \\&lt;br /&gt;
              0         &amp;amp; 2-\lambda  &amp;amp; 0 \\&lt;br /&gt;
              -\frac{i}{2}         &amp;amp;       0    &amp;amp; \frac{5}{2}-\lambda &lt;br /&gt;
           \end{array}\right) &lt;br /&gt;
    = (2-\lambda)\left[\left(\frac{5}{2}-\lambda\right)^2-\frac{1}{4}\right].&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This implies that the eigenvalues (see Section [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|C.6]]) are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda = 2,2, \mbox{ or } 3.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Note that there are two that are the same, or degenerate.  &lt;br /&gt;
Let &amp;lt;math&amp;gt;\lambda_1=2\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\lambda_2 = 2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\lambda_3 = 3\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
To find eigenvectors, &amp;lt;!--\index{eigenvalues}--&amp;gt; we calculate&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Mv &amp;amp;= \lambda v, \\&lt;br /&gt;
\left(\begin{array}{ccc}&lt;br /&gt;
              \frac{5}{2} &amp;amp; 0 &amp;amp; \frac{i}{2} \\&lt;br /&gt;
              0 &amp;amp; 2 &amp;amp; 0 \\&lt;br /&gt;
              -\frac{i}{2} &amp;amp; 0 &amp;amp; \frac{5}{2} \end{array}\right)\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right) &amp;amp;= \lambda\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right)&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.36}}&lt;br /&gt;
for each &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
For &amp;lt;math&amp;gt;\lambda = 3\,\!&amp;lt;/math&amp;gt;, we get the following equations:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{5}{2}v_1 + \frac{i}{2}v_3 &amp;amp;= 3v_1, \\&lt;br /&gt;
2v_2 &amp;amp;= 3v_2,  \\&lt;br /&gt;
-\frac{i}{2}v_1 + \frac{5}{2}v_3 &amp;amp;= 3v_3, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.37}}&lt;br /&gt;
so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
iv_3  &amp;amp;= v_1, \\&lt;br /&gt;
v_2 &amp;amp;= 0. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.38}}&lt;br /&gt;
Now for &amp;lt;math&amp;gt;\lambda = 2\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{5}{2}v_1 +\frac{i}{2}v_3 &amp;amp;= 2 v_1, \\&lt;br /&gt;
2v_2 &amp;amp;= 2v_2, \\&lt;br /&gt;
-\frac{i}{2}v_1 + \frac{5}{2}v_3 &amp;amp;= 2v_3, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.39}}&lt;br /&gt;
so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_3  &amp;amp;= iv_1, \\&lt;br /&gt;
v_2 &amp;amp;= \mbox{anything}. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.40}}&lt;br /&gt;
We would like to have a set of orthonormal eigenvectors.  (For a Hermitian matrix, the eigenvectors can always be chosen to be orthonormal.)  We choose the three eigenvectors to be&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_2 = \left(\begin{array}{c} 1 \\ a \\ i&lt;br /&gt;
  \end{array}\right), \;\;  &lt;br /&gt;
v_2^\prime = \left(\begin{array}{c} 1 \\ a^\prime \\ i&lt;br /&gt;
  \end{array}\right), \;\;&lt;br /&gt;
v_3 = \left(\begin{array}{c} i \\ 0 \\ 1&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
We set the inner product of the two vectors &amp;lt;math&amp;gt; v_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v_2^\prime \,\!&amp;lt;/math&amp;gt; equal to zero so as to make them orthogonal: &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
1 + a a^\prime +1 = 2 + a a^\prime = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now we can choose &amp;lt;math&amp;gt; a = \sqrt{2}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; a^\prime = -\sqrt{2}\,\!&amp;lt;/math&amp;gt; so that the normalized eigenvectors are&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_2 = \frac{1}{2}\left(\begin{array}{c} 1 \\ \sqrt{2} \\ i&lt;br /&gt;
  \end{array}\right), \;\;  &lt;br /&gt;
v_2^\prime = \frac{1}{2}\left(\begin{array}{c} 1 \\ -\sqrt{2} \\ i&lt;br /&gt;
  \end{array}\right), \;\;&lt;br /&gt;
v_3 = \frac{1}{\sqrt{2}}\left(\begin{array}{c} i \\ 0 \\ 1&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
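As a check not in the original text, one can confirm that the three vectors above are eigenvectors with the stated eigenvalues and that the degenerate pair is orthogonal:

```python
import numpy as np

M = np.array([[2.5,   0, 0.5j],
              [0,     2, 0   ],
              [-0.5j, 0, 2.5]])
v2  = np.array([1,  np.sqrt(2), 1j]) / 2
v2p = np.array([1, -np.sqrt(2), 1j]) / 2
v3  = np.array([1j, 0, 1]) / np.sqrt(2)

for v, lam in [(v2, 2), (v2p, 2), (v3, 3)]:
    print(np.allclose(M @ v, lam * v))    # M v = lambda v

# the two degenerate eigenvectors were chosen orthogonal
print(np.isclose(np.vdot(v2, v2p), 0))
```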
&lt;br /&gt;
===Tensor Products===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The tensor product, &amp;lt;!--\index{tensor product} --&amp;gt;&lt;br /&gt;
or the Kronecker product, &amp;lt;!--\index{Kronecker product}--&amp;gt;&lt;br /&gt;
is used extensively in quantum mechanics and&lt;br /&gt;
throughout the course.  It is commonly denoted with a &amp;lt;math&amp;gt;\otimes\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
symbol, although this is often left out.  In fact, the following&lt;br /&gt;
are commonly found in the literature as notation for the tensor&lt;br /&gt;
product of two vectors &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert\Phi\right\rangle\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert\Psi\right\rangle\otimes\left\vert\Phi\right\rangle &amp;amp;= \left\vert\Psi\right\rangle\left\vert\Phi\right\rangle  \\&lt;br /&gt;
                         &amp;amp;= \left\vert\Psi\Phi\right\rangle.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.41}}&lt;br /&gt;
Each of these notations has its advantages, and all will be used in&lt;br /&gt;
different circumstances in this text.  &lt;br /&gt;
&lt;br /&gt;
The tensor product is also often used for operators.  Several&lt;br /&gt;
examples &lt;br /&gt;
will be given, one that explicitly calculates the tensor product for&lt;br /&gt;
two vectors and one that calculates it for two matrices which could&lt;br /&gt;
represent operators.  However, these cases are not really different,&lt;br /&gt;
since a vector is simply a &amp;lt;math&amp;gt;1\times n\,\!&amp;lt;/math&amp;gt; or an &amp;lt;math&amp;gt;n\times 1\,\!&amp;lt;/math&amp;gt; matrix.  It is also&lt;br /&gt;
noteworthy that the two objects in the tensor product need not be of&lt;br /&gt;
the same type.  In general, a tensor product of an &amp;lt;math&amp;gt;n\times m\,\!&amp;lt;/math&amp;gt; object&lt;br /&gt;
(array) with a &amp;lt;math&amp;gt;p\times q\,\!&amp;lt;/math&amp;gt; object will produce an &amp;lt;math&amp;gt;np\times mq\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
object.  &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
The tensor product of two objects is computed as follows.&lt;br /&gt;
Let &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; be an &amp;lt;math&amp;gt;n\times m\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; be a &amp;lt;math&amp;gt;p\times q\,\!&amp;lt;/math&amp;gt; array, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
A = \left(\begin{array}{cccc} &lt;br /&gt;
           a_{11} &amp;amp; a_{12} &amp;amp; \cdots &amp;amp; a_{1m} \\&lt;br /&gt;
           a_{21} &amp;amp; a_{22} &amp;amp; \cdots &amp;amp; a_{2m} \\&lt;br /&gt;
           \vdots &amp;amp;        &amp;amp; \ddots &amp;amp;      \\&lt;br /&gt;
           a_{n1} &amp;amp; a_{n2} &amp;amp; \cdots &amp;amp; a_{nm} \end{array}\right),&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.42}}&lt;br /&gt;
and similarly for &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;.  Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
A\otimes B = \left(\begin{array}{cccc} &lt;br /&gt;
             a_{11}B &amp;amp; a_{12}B &amp;amp; \cdots &amp;amp; a_{1m}B \\&lt;br /&gt;
             a_{21}B &amp;amp; a_{22}B &amp;amp; \cdots &amp;amp; a_{2m}B \\&lt;br /&gt;
             \vdots  &amp;amp;         &amp;amp; \ddots &amp;amp;      \\&lt;br /&gt;
             a_{n1}B &amp;amp; a_{n2}B &amp;amp; \cdots &amp;amp; a_{nm}B \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.43}}&lt;br /&gt;
&lt;br /&gt;
Let us now consider two examples.  First let &amp;lt;math&amp;gt;\left\vert\phi\right\rangle\,\!&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; be as before,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi\right\rangle = \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right) \;\; &lt;br /&gt;
\mbox{and} \;\; &lt;br /&gt;
\left\vert\phi\right\rangle = \left(\begin{array}{c} \gamma \\ \delta&lt;br /&gt;
  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert\psi\right\rangle\otimes\left\vert\phi\right\rangle &amp;amp;= \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right) &lt;br /&gt;
\otimes &lt;br /&gt;
\left(\begin{array}{c} \gamma \\ \delta&lt;br /&gt;
  \end{array}\right)   \\&lt;br /&gt;
                            &amp;amp;= \left(\begin{array}{c} \alpha\gamma\\ &lt;br /&gt;
                                                     \alpha\delta \\&lt;br /&gt;
                                                     \beta\gamma \\ &lt;br /&gt;
                                                     \beta\delta &lt;br /&gt;
                                       \end{array}\right).  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.44}}&lt;br /&gt;
Also&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert\psi\right\rangle\otimes\left\langle\phi\right\vert &amp;amp;= \left\vert\psi\right\rangle\left\langle\phi\right\vert \\&lt;br /&gt;
                            &amp;amp;= \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right) &lt;br /&gt;
\otimes &lt;br /&gt;
\left(\begin{array}{cc} \gamma^* &amp;amp; \delta^*&lt;br /&gt;
  \end{array}\right)   \\&lt;br /&gt;
                            &amp;amp;= \left(\begin{array}{cc}&lt;br /&gt;
                                \alpha\gamma^* &amp;amp; \alpha\delta^* \\&lt;br /&gt;
                                \beta\gamma^*  &amp;amp; \beta\delta^* &lt;br /&gt;
                                       \end{array}\right).  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.45}}&lt;br /&gt;
&lt;br /&gt;
Now consider two matrices&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
A = \left(\begin{array}{cc} &lt;br /&gt;
                 a &amp;amp; b \\&lt;br /&gt;
                 c &amp;amp; d  \end{array}\right) &lt;br /&gt;
 \;\; \mbox{and} \;\;&lt;br /&gt;
B = \left(\begin{array}{cc} &lt;br /&gt;
               e &amp;amp; f \\&lt;br /&gt;
               g &amp;amp; h  \end{array}\right).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A\otimes B &amp;amp;=  \left(\begin{array}{cc} &lt;br /&gt;
                 a &amp;amp; b \\&lt;br /&gt;
                 c &amp;amp; d  \end{array}\right) &lt;br /&gt;
 \otimes&lt;br /&gt;
               \left(\begin{array}{cc} &lt;br /&gt;
                 e &amp;amp; f \\&lt;br /&gt;
                 g &amp;amp; h  \end{array}\right)   \\  &lt;br /&gt;
           &amp;amp;=  \left(\begin{array}{cccc} &lt;br /&gt;
                 ae &amp;amp; af &amp;amp; be &amp;amp; bf \\&lt;br /&gt;
                 ag &amp;amp; ah &amp;amp; bg &amp;amp; bh \\&lt;br /&gt;
                 ce &amp;amp; cf &amp;amp; de &amp;amp; df \\&lt;br /&gt;
                 cg &amp;amp; ch &amp;amp; dg &amp;amp; dh \end{array}\right).  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.46}}&lt;br /&gt;
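In NumPy the tensor product of Eq. C.43 is computed by 'np.kron'; the small check below (our own, not part of the original text) compares it against the block structure written out in Eq. C.46:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])
AB = np.kron(A, B)   # a 4x4 matrix, per the n p x m q rule

# assemble the block form a_ij B by hand and compare
blocks = np.vstack([np.hstack([1 * B, 2 * B]),
                    np.hstack([3 * B, 4 * B])])
print(AB.shape)
print(np.array_equal(AB, blocks))
```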
&lt;br /&gt;
====Properties of Tensor Products====&lt;br /&gt;
&lt;br /&gt;
Listed here are useful properties of tensor products, where &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; are arrays of compatible dimensions:&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)(C\otimes D) = AC \otimes BD\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)^T = A^T\otimes B^T\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)^* = A^*\otimes B^*\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)\otimes C = A\otimes(B\otimes C)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A+B) \otimes C = A\otimes C+B\otimes C\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;A\otimes(B+C) = A\otimes B + A\otimes C\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;\mbox{Tr}(A\otimes B) = \mbox{Tr}(A)\mbox{Tr}(B)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
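Two of these properties, the mixed-product rule (property 1) and the trace rule (property 7), are checked numerically below on random matrices; this sketch is ours, not part of the original text:

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C, D = (rng.standard_normal((2, 2)) for _ in range(4))

# property 1: the tensor product of AC and BD equals the product of tensor products
print(np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D)))
# property 7: the trace of a tensor product is the product of the traces
print(np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B)))
```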
(See [[Bibliography#HornNJohnsonII|Horn and Johnson, Topics in Matrix Analysis]], Chapter 4.)&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_C_-_Vectors_and_Linear_Algebra&amp;diff=1790</id>
		<title>Appendix C - Vectors and Linear Algebra</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_C_-_Vectors_and_Linear_Algebra&amp;diff=1790"/>
		<updated>2012-01-05T09:01:48Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* The Determinant */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
This appendix introduces some aspects of linear algebra and complex&lt;br /&gt;
algebra that will be helpful for the course.  In addition, Dirac&lt;br /&gt;
notation is introduced and explained.&lt;br /&gt;
&lt;br /&gt;
===Vectors===&lt;br /&gt;
&lt;br /&gt;
Here we review some facts about real vectors before discussing the representations and complex analogues used in quantum mechanics.  &lt;br /&gt;
&lt;br /&gt;
====Real Vectors====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The simple definition of a vector --- an object that has magnitude and&lt;br /&gt;
direction --- is helpful to keep in mind even when dealing with complex&lt;br /&gt;
and/or abstract vectors, as we will here.  In three-dimensional space,&lt;br /&gt;
a vector is often written as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v} = v_x\hat{x} + v_y \hat{y} + v_z\hat{z},&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where the hat (&amp;lt;math&amp;gt;\hat{\cdot}\,\!&amp;lt;/math&amp;gt;) denotes a unit vector and the components&lt;br /&gt;
&amp;lt;math&amp;gt;v_i\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;i = x,y,z\,\!&amp;lt;/math&amp;gt; are just numbers.  The unit vectors are also&lt;br /&gt;
known as ''basis'' vectors. &amp;lt;!-- \index{basis vectors!real} --&amp;gt; &lt;br /&gt;
This is because any vector&lt;br /&gt;
in real three-dimensional space can be written in terms of these unit/basis vectors.  In&lt;br /&gt;
some sense they are the basic components of any vector.  Other basis&lt;br /&gt;
vectors could be used, however, such as in spherical and cylindrical coordinates.  When dealing with more abstract and/or complex vectors,&lt;br /&gt;
it is often helpful to ask what one would do for an ordinary&lt;br /&gt;
three-dimensional vector.  For example, properties of unit vectors,&lt;br /&gt;
dot products, etc. in three dimensions are similar to the analogous&lt;br /&gt;
constructions in higher dimensions.  &lt;br /&gt;
&lt;br /&gt;
The ''inner product'',&amp;lt;!-- \index{inner product}--&amp;gt; or ''dot product'',&amp;lt;!--\index{dot product}--&amp;gt; for two real three-dimensional vectors,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v} = v_x\hat{x} + v_y \hat{y} + v_z\hat{z}, \;\; &lt;br /&gt;
\vec{w} = w_x\hat{x} + w_y \hat{y} + w_z\hat{z},&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
can be computed as follows:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}\cdot\vec{w} = v_xw_x + v_yw_y + v_zw_z.&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
For the inner product of &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt; with itself, we get the square of&lt;br /&gt;
the magnitude of &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt;, denoted &amp;lt;math&amp;gt;|\vec{v}|^2\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
|\vec{v}|^2 = \vec{v}\cdot\vec{v} = v_xv_x + v_yv_y +&lt;br /&gt;
v_zv_z=v_x^2+v_y^2+v_z^2. &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If we want a unit vector in the direction of &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt;, we can simply divide it&lt;br /&gt;
by its magnitude:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\hat{v} = \frac{\vec{v}}{|\vec{v}|}.  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now, of course, &amp;lt;math&amp;gt;\hat{v}\cdot\hat{v}= 1\,\!&amp;lt;/math&amp;gt;, which can easily be checked.  &lt;br /&gt;
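The magnitude and normalization steps above can be sketched numerically; this example is our own and not part of the original text:

```python
import numpy as np

v = np.array([3.0, 0.0, 4.0])
mag = np.sqrt(np.dot(v, v))   # the magnitude of v, here 5
vhat = v / mag                # the unit vector along v
print(np.isclose(mag, 5.0), np.isclose(np.dot(vhat, vhat), 1.0))
```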
&lt;br /&gt;
There are several ways to represent a vector.  The ones we will use&lt;br /&gt;
most often are column and row vector notations.  So, for example, we&lt;br /&gt;
could write the vector above as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v} = \left(\begin{array}{c} v_x \\ v_y \\ v_z&lt;br /&gt;
  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
In this case, our unit vectors are represented by the following: &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\hat{x} = \left(\begin{array}{c} 1 \\ 0 \\ 0&lt;br /&gt;
  \end{array}\right), \;\;  &lt;br /&gt;
\hat{y} = \left(\begin{array}{c} 0 \\ 1 \\ 0&lt;br /&gt;
  \end{array}\right), \;\;&lt;br /&gt;
\hat{z} = \left(\begin{array}{c} 0 \\ 0 \\ 1&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We next turn to the subject of complex vectors and the relevant&lt;br /&gt;
notation. &lt;br /&gt;
We will see how to compute the inner product later, since some other&lt;br /&gt;
definitions are required.&lt;br /&gt;
&lt;br /&gt;
====Complex Vectors====&lt;br /&gt;
&lt;br /&gt;
For complex vectors in quantum mechanics, Dirac notation is used most often.  This notation uses a &amp;lt;math&amp;gt;\left\vert \cdot \right\rangle\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
called a ''ket'', for a vector.  So our vector &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt; would be&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert v \right\rangle  = \left(\begin{array}{c} v_x \\ v_y \\ v_z&lt;br /&gt;
  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For qubits, i.e. two-state quantum systems, complex vectors will often be used:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align} \left\vert \psi \right\rangle &amp;amp;= \left(\begin{array}{c} \alpha \\ \beta &lt;br /&gt;
  \end{array}\right) \\&lt;br /&gt;
           &amp;amp;=\alpha \left\vert 0\right\rangle + \beta\left\vert 1\right\rangle,\end{align}&amp;lt;/math&amp;gt;|C.1}} &lt;br /&gt;
where&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert 0\right\rangle = \left(\begin{array}{c} 1 \\ 0 &lt;br /&gt;
  \end{array}\right), \;\;\mbox{and} \;\;&lt;br /&gt;
\left\vert 1\right\rangle = \left(\begin{array}{c} 0 \\ 1 &lt;br /&gt;
  \end{array}\right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
are the basis vectors.  The two numbers &amp;lt;math&amp;gt;\alpha\,\!&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; are complex numbers, so the vector is said to&lt;br /&gt;
be a complex vector.&lt;br /&gt;
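Equation (C.1) can be reproduced numerically. In this Python/NumPy sketch the particular values of the amplitudes are illustrative assumptions, chosen so that the state is normalized:

```python
import numpy as np

# Basis vectors |0> and |1> as column arrays.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Illustrative complex amplitudes with |alpha|^2 + |beta|^2 = 1.
alpha, beta = 1/np.sqrt(2), 1j/np.sqrt(2)

# |psi> = alpha|0> + beta|1> reproduces the column vector (alpha, beta).
psi = alpha*ket0 + beta*ket1
assert np.allclose(psi, np.array([alpha, beta]))
assert np.isclose(abs(alpha)**2 + abs(beta)**2, 1.0)
```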
&lt;br /&gt;
====Inner Product====&lt;br /&gt;
&lt;br /&gt;
Now let us suppose we have another complex vector,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \phi \right\rangle  = \left(\begin{array}{c} \gamma \\ \delta &lt;br /&gt;
  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The ''inner product'' between two vectors is often written as &amp;lt;math&amp;gt;\left\langle \phi \vert \psi \right\rangle \;\! &amp;lt;/math&amp;gt;, which is the same as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align} (\left\vert \phi \right\rangle )^\dagger\left\vert \psi \right\rangle &lt;br /&gt;
&amp;amp;= \left(\begin{array}{c} \gamma \\ \delta \end{array}\right)^\dagger&lt;br /&gt;
\left(\begin{array}{c} \alpha \\ \beta   \end{array}\right) \\&lt;br /&gt;
           &amp;amp;= \left(\begin{array}{cc} \gamma^* &amp;amp; \delta^* \end{array}\right) \left(\begin{array}{c} \alpha \\ \beta   \end{array}\right) \\  &lt;br /&gt;
           &amp;amp;= \gamma^*\alpha + \delta^*\beta \end{align} \;\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
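The inner product formula above can be verified numerically. In this Python/NumPy sketch the component values are illustrative assumptions; note that `np.vdot` conjugates its first argument, matching the definition of the bracket:

```python
import numpy as np

# Illustrative components (not from the text).
alpha, beta = 0.6, 0.8j
gamma, delta = 1/np.sqrt(2), 1/np.sqrt(2)

psi = np.array([alpha, beta])    # |psi>
phi = np.array([gamma, delta])   # |phi>

# <phi|psi> = gamma^* alpha + delta^* beta
inner = np.vdot(phi, psi)        # vdot conjugates the first argument
assert np.isclose(inner, np.conj(gamma)*alpha + np.conj(delta)*beta)
```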
&lt;br /&gt;
====Outer Product====&lt;br /&gt;
&lt;br /&gt;
The ''outer product'' between these same two vectors is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt; &lt;br /&gt;
\begin{align} (\left\vert \phi \right\rangle )(\left\vert \psi \right\rangle)^\dagger &lt;br /&gt;
 &amp;amp;=  \left\vert \phi \right\rangle \left\langle \psi \right\vert \\&lt;br /&gt;
&amp;amp;= \left(\begin{array}{c} \gamma \\ \delta \end{array}\right)&lt;br /&gt;
\left(\begin{array}{c} \alpha \\ \beta   \end{array}\right)^\dagger \\&lt;br /&gt;
           &amp;amp;= \left(\begin{array}{c} \gamma \\ \delta \end{array}\right) \left(\begin{array}{cc} \alpha^* &amp;amp; \beta^*   \end{array}\right) \\  &lt;br /&gt;
           &amp;amp;=   \left(\begin{array}{cc} \gamma\alpha^* &amp;amp; \gamma\beta^* \\  \delta\alpha^* &amp;amp; \delta\beta^*  \end{array}\right) \end{align}\;\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
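The outer product matrix can likewise be checked in Python/NumPy (the vectors are illustrative choices); `np.outer` of the first vector with the conjugate of the second reproduces the matrix of entries above:

```python
import numpy as np

phi = np.array([1.0, 1j]) / np.sqrt(2)   # illustrative |phi> = (gamma, delta)
psi = np.array([0.6, 0.8])               # illustrative |psi> = (alpha, beta)

# |phi><psi| has entries gamma alpha^*, gamma beta^*, delta alpha^*, delta beta^*.
M = np.outer(phi, np.conj(psi))
assert M.shape == (2, 2)
assert np.isclose(M[0, 0], phi[0]*np.conj(psi[0]))   # gamma alpha^*
assert np.isclose(M[1, 1], phi[1]*np.conj(psi[1]))   # delta beta^*
```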
&lt;br /&gt;
===Linear Algebra: Matrices===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are many aspects of linear algebra that are quite useful in&lt;br /&gt;
quantum mechanics.  We will briefly discuss several of these aspects here.&lt;br /&gt;
First, some definitions and properties are provided that will&lt;br /&gt;
be useful.  Some familiarity with matrices&lt;br /&gt;
will be assumed, although many basic definitions are also included.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Let us denote some &amp;lt;math&amp;gt;m\times n\,\!&amp;lt;/math&amp;gt; matrix by &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.  The set of all &amp;lt;math&amp;gt;m\times&lt;br /&gt;
n\,\!&amp;lt;/math&amp;gt; matrices with real entries is &amp;lt;math&amp;gt;M(m\times n,\mathbb{R})\,\!&amp;lt;/math&amp;gt;.  Such matrices&lt;br /&gt;
are said to be real since they have all real entries.  Similarly, the&lt;br /&gt;
set of &amp;lt;math&amp;gt;m\times n\,\!&amp;lt;/math&amp;gt; complex matrices is &amp;lt;math&amp;gt;M(m\times n,\mathbb{C})\,\!&amp;lt;/math&amp;gt;.  For the&lt;br /&gt;
set of square &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; complex matrices, we simply write&lt;br /&gt;
&amp;lt;math&amp;gt;M(n,\mathbb{C})\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We will also refer to the set of matrix elements, &amp;lt;math&amp;gt;a_{ij}\,\!&amp;lt;/math&amp;gt;, where the&lt;br /&gt;
first index (&amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; in this case) labels the row and the second &amp;lt;math&amp;gt;(j)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
labels the column.  Thus the element &amp;lt;math&amp;gt;a_{23}\,\!&amp;lt;/math&amp;gt; is the element in the&lt;br /&gt;
second row and third column.  A comma is inserted if there is some&lt;br /&gt;
ambiguity.  For example, in a large matrix the element in the&lt;br /&gt;
2nd row and 12th column is written as &amp;lt;math&amp;gt;a_{2,12}\,\!&amp;lt;/math&amp;gt; to distinguish it&lt;br /&gt;
from &amp;lt;math&amp;gt;a_{21,2}\,\!&amp;lt;/math&amp;gt;, the element in the 21st row and 2nd column.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Complex Conjugate====&lt;br /&gt;
&lt;br /&gt;
The ''complex conjugate of a matrix'' &amp;lt;!-- \index{complex conjugate!of a matrix}--&amp;gt;&lt;br /&gt;
is the matrix with each element replaced by its complex conjugate.  In&lt;br /&gt;
other words, to take the complex conjugate of a matrix, one takes the&lt;br /&gt;
complex conjugate of each entry in the matrix.  We denote the complex&lt;br /&gt;
conjugate with a star, like this: &amp;lt;math&amp;gt;A^*\,\!&amp;lt;/math&amp;gt;.  For example,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A^* &amp;amp;= \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11} &amp;amp; a_{12} &amp;amp; a_{13} \\&lt;br /&gt;
        a_{21} &amp;amp; a_{22} &amp;amp; a_{23} \\&lt;br /&gt;
        a_{31} &amp;amp; a_{32} &amp;amp; a_{33} \end{array}\right)^*  \\&lt;br /&gt;
    &amp;amp;= \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11}^* &amp;amp; a_{12}^* &amp;amp; a_{13}^* \\&lt;br /&gt;
        a_{21}^* &amp;amp; a_{22}^* &amp;amp; a_{23}^* \\&lt;br /&gt;
        a_{31}^* &amp;amp; a_{32}^* &amp;amp; a_{33}^* \end{array}\right). &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.2}}&lt;br /&gt;
(Notice that the notation for a matrix is a capital letter, whereas&lt;br /&gt;
the entries are represented by lower case&lt;br /&gt;
letters.)&lt;br /&gt;
&lt;br /&gt;
====Transpose====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''transpose'' &amp;lt;!-- \index{transpose} --&amp;gt; of a matrix is the same set of&lt;br /&gt;
elements, but now the first row becomes the first column, the second row&lt;br /&gt;
becomes the second column, and so on.  Thus the rows and columns are&lt;br /&gt;
interchanged.  For example, for a square &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrix, the&lt;br /&gt;
transpose is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A^T &amp;amp;= \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11} &amp;amp; a_{12} &amp;amp; a_{13} \\&lt;br /&gt;
        a_{21} &amp;amp; a_{22} &amp;amp; a_{23} \\&lt;br /&gt;
        a_{31} &amp;amp; a_{32} &amp;amp; a_{33} \end{array}\right)^T \\&lt;br /&gt;
    &amp;amp;= \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11} &amp;amp; a_{21} &amp;amp; a_{31} \\&lt;br /&gt;
        a_{12} &amp;amp; a_{22} &amp;amp; a_{32} \\&lt;br /&gt;
        a_{13} &amp;amp; a_{23} &amp;amp; a_{33} \end{array}\right). &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.3}}&lt;br /&gt;
&lt;br /&gt;
====Hermitian Conjugate====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The complex conjugate and transpose of a matrix is called the ''Hermitian conjugate'', or simply the ''dagger'', of a matrix.  It is called the dagger because of the symbol used to denote it&lt;br /&gt;
(&amp;lt;math&amp;gt;\dagger\,\!&amp;lt;/math&amp;gt;):&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
(A^T)^* = (A^*)^T \equiv A^\dagger.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.4}}&lt;br /&gt;
For our &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; example, &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
A^\dagger = \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11}^* &amp;amp; a_{21}^* &amp;amp; a_{31}^* \\&lt;br /&gt;
        a_{12}^* &amp;amp; a_{22}^* &amp;amp; a_{32}^* \\&lt;br /&gt;
        a_{13}^* &amp;amp; a_{23}^* &amp;amp; a_{33}^* \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If a matrix is its own Hermitian conjugate, i.e. &amp;lt;math&amp;gt;A^\dagger = A\,\!&amp;lt;/math&amp;gt;, then&lt;br /&gt;
we call it a ''Hermitian matrix''.  &amp;lt;!-- \index{Hermitian matrix}--&amp;gt;&lt;br /&gt;
(Clearly this is only possible for square matrices.) Hermitian&lt;br /&gt;
matrices are very important in quantum mechanics since their&lt;br /&gt;
eigenvalues are real.  (See Sec.([[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|Eigenvalues and Eigenvectors]]).)&lt;br /&gt;
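A quick numerical illustration of the dagger, in Python/NumPy (the matrix entries are illustrative assumptions): conjugating then transposing gives the same result as transposing then conjugating, and a Hermitian combination has real eigenvalues.

```python
import numpy as np

# An illustrative complex matrix.
A = np.array([[1.0, 2.0 - 1.0j],
              [0.5j, 3.0]])

A_dagger = A.conj().T                     # (A^*)^T = A^dagger
assert np.allclose(A_dagger, A.T.conj())  # (A^T)^* gives the same matrix

# A + A^dagger is Hermitian; its eigenvalues come out real.
H = A + A_dagger
assert np.allclose(H, H.conj().T)
assert np.allclose(np.linalg.eigvals(H).imag, 0.0, atol=1e-10)
```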
&lt;br /&gt;
====Index Notation====&lt;br /&gt;
&lt;br /&gt;
Very often we write the product of two matrices &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; simply as&lt;br /&gt;
&amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;C=AB\,\!&amp;lt;/math&amp;gt;.  However, it is also quite useful to write this&lt;br /&gt;
in component form.  In this case, if these are &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrices, the component form will be &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
c_{ik} = \sum_{j=1}^n a_{ij}b_{jk}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This says that the element in the &amp;lt;math&amp;gt;i^{\mbox{th}}\,\!&amp;lt;/math&amp;gt; row and&lt;br /&gt;
&amp;lt;math&amp;gt;k^{\mbox{th}}\,\!&amp;lt;/math&amp;gt; column of the matrix &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the sum &amp;lt;math&amp;gt;\sum_{j=1}^n&lt;br /&gt;
a_{ij}b_{jk}\,\!&amp;lt;/math&amp;gt;.  The transpose of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; has elements&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
c_{ki} = \sum_{j=1}^n a_{kj}b_{ji}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now if we write this in terms of the elements of the transposed matrices, using &amp;lt;math&amp;gt;a_{kj} = a^T_{jk}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b_{ji} = b^T_{ij}\,\!&amp;lt;/math&amp;gt;, it reads&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
c_{ki} = \sum_{j=1}^n a^T_{jk}\, b^T_{ij} = \sum_{j=1}^n b^T_{ij}\, a^T_{jk}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This gives us a way of seeing the general rule that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
C^T = B^TA^T.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
It follows that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
C^\dagger = B^\dagger A^\dagger.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====The Trace====&lt;br /&gt;
&lt;br /&gt;
The ''trace'' &amp;lt;!-- \index{trace}--&amp;gt; of a matrix is the sum of the diagonal&lt;br /&gt;
elements and is denoted &amp;lt;math&amp;gt;\mbox{Tr}\,\!&amp;lt;/math&amp;gt;.  So for example, the trace of an&lt;br /&gt;
&amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mbox{Tr}(A) = \sum_{i=1}^n a_{ii}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some useful properties of the trace are the following:&lt;br /&gt;
#&amp;lt;math&amp;gt;\mbox{Tr}(AB) = \mbox{Tr}(BA)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;\mbox{Tr}(A + B) = \mbox{Tr}(A) + \mbox{Tr}(B)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Using the first of these results,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mbox{Tr}(UAU^{-1}) = \mbox{Tr}(U^{-1}UA) = \mbox{Tr}(A).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This relation is used so often that we state it here explicitly.&lt;br /&gt;
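The trace properties above, including the invariance under a similarity transformation, can be checked numerically. In this Python/NumPy sketch the random 4×4 matrices are illustrative; a random real matrix is invertible with probability one:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

assert np.isclose(np.trace(A @ B), np.trace(B @ A))             # Tr(AB) = Tr(BA)
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))   # linearity

# Tr(U A U^{-1}) = Tr(A) for any invertible U.
U = rng.standard_normal((4, 4))
assert np.isclose(np.trace(U @ A @ np.linalg.inv(U)), np.trace(A))
```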
&lt;br /&gt;
====The Determinant====&lt;br /&gt;
&lt;br /&gt;
For a square matrix, the determinant is quite a useful thing.  For&lt;br /&gt;
example, an &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix is invertible if and only if its&lt;br /&gt;
determinant is not zero.  So let us define the determinant and give&lt;br /&gt;
some properties and examples.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''determinant''&amp;lt;!--\index{determinant}--&amp;gt; of a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrix, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
N = \left(\begin{array}{cc}&lt;br /&gt;
                 a &amp;amp; b \\&lt;br /&gt;
                 c &amp;amp; d \end{array}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.5}}&lt;br /&gt;
is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(N) = ad-bc.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.6}}&lt;br /&gt;
Higher-order determinants can be written in terms of smaller ones in&lt;br /&gt;
the standard way.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''determinant''&amp;lt;!-- \index{determinant}--&amp;gt; of a matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can&lt;br /&gt;
also be written in terms of its components as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(A) = \sum_{i,j,k,l,...} \epsilon_{ijkl...}&lt;br /&gt;
a_{1i}a_{2j}a_{3k}a_{4l} ...,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.7}}&lt;br /&gt;
where the symbol &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{ijkl...} = \begin{cases}&lt;br /&gt;
                       +1, \; \mbox{if } \; ijkl... = 1234... (\mbox{in order, or any even number of permutations}),\\&lt;br /&gt;
                       -1, \; \mbox{if } \; ijkl... = 2134... (\mbox{or any odd number of permutations}),\\&lt;br /&gt;
                       \;\;\; 0, \; \mbox{otherwise}, \; (\mbox{meaning any index is repeated}).&lt;br /&gt;
                      \end{cases}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.8}}&lt;br /&gt;
&lt;br /&gt;
Let us consider the example of the &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; given&lt;br /&gt;
above.  The determinant can be calculated by&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(A) = \sum_{i,j,k} \epsilon_{ijk}&lt;br /&gt;
a_{1i}a_{2j}a_{3k},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where, explicitly, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{ijk} = \begin{cases}&lt;br /&gt;
                       +1, \;\mbox{if }\; ijk= 123,231,\; \mbox{or}\; 312, (\mbox{These are even permutations of }123),\\&lt;br /&gt;
                       -1, \;\mbox{if }\; ijk = 213,132,\;\mbox{or}\;321\;(\mbox{These are odd permutations of }123),\\&lt;br /&gt;
                    \;\;\;  0, \; \mbox{otherwise}, \; (\mbox{meaning any index is repeated}),&lt;br /&gt;
                 \end{cases}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.9}}&lt;br /&gt;
so that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\det(A) &amp;amp;= \epsilon_{123}a_{11}a_{22}a_{33} &lt;br /&gt;
         +\epsilon_{132}a_{11}a_{23}a_{32}&lt;br /&gt;
         +\epsilon_{231}a_{12}a_{23}a_{31}  \\&lt;br /&gt;
       &amp;amp;\quad +\epsilon_{213}a_{12}a_{21}a_{33}&lt;br /&gt;
         +\epsilon_{312}a_{13}a_{21}a_{32}&lt;br /&gt;
         +\epsilon_{321}a_{13}a_{22}a_{31}.&lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.10}}&lt;br /&gt;
Now given the values of &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; in [[#eqC.9|Eq. C.9]],&lt;br /&gt;
this is&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(A) = a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} + a_{12}a_{23}a_{31} &lt;br /&gt;
         - a_{12}a_{21}a_{33} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
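The epsilon-symbol sum of Eq. (C.7) can be implemented directly and compared against a library routine. In this Python sketch the helper `levi_civita_det` and the sample matrix are illustrative assumptions; the parity of each permutation plays the role of the epsilon symbol:

```python
import itertools
import numpy as np

def levi_civita_det(A):
    """Determinant of an n x n matrix as a sum over permutations."""
    n = A.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # Sign = (-1)^(number of inversions) = epsilon_{ijk...}.
        inversions = sum(
            1 for x in range(n) for y in range(x + 1, n) if perm[x] > perm[y]
        )
        term = (-1.0) ** inversions
        for row, col in enumerate(perm):
            term *= A[row, col]     # a_{1i} a_{2j} a_{3k} ...
        total += term
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.isclose(levi_civita_det(A), np.linalg.det(A))
```

This O(n!) sum is meant to illustrate the definition, not to compute determinants efficiently.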
&lt;br /&gt;
The determinant has several properties that are useful to know.  A few are listed here:  &lt;br /&gt;
#The determinant of the transpose of a matrix is the same as the determinant of the matrix itself: &amp;lt;center&amp;gt;&amp;lt;math&amp;gt; \det(A) = \det(A^T).\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
#The determinant of a product is the product of determinants:    &amp;lt;center&amp;gt;&amp;lt;math&amp;gt; \det(AB) = \det(A)\det(B).\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
From this last property, another specific property can be derived.&lt;br /&gt;
If we take the determinant of the product of a matrix and its&lt;br /&gt;
inverse, we find&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(U U^{-1}) = \det(U)\det(U^{-1}) = \det(\mathbb{I}) = 1,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
since the determinant of the identity is one.  This implies that&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(U^{-1}) = \frac{1}{\det(U)}.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====The Inverse of a Matrix====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The inverse &amp;lt;!-- \index{inverse}--&amp;gt; of a square matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is another matrix,&lt;br /&gt;
denoted &amp;lt;math&amp;gt;A^{-1}\,\!&amp;lt;/math&amp;gt;, such that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
AA^{-1} = A^{-1}A = \mathbb{I},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt; is the identity matrix consisting of zeroes everywhere&lt;br /&gt;
except the diagonal, which has ones.  For example, the &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}_3 = \left(\begin{array}{ccc} 1 &amp;amp; 0 &amp;amp; 0 \\ 0 &amp;amp; 1 &amp;amp; 0 \\ 0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is important to note that ''a matrix is invertible if and only if its determinant is nonzero.''  Thus one only needs to calculate the&lt;br /&gt;
determinant to see if a matrix has an inverse or not.&lt;br /&gt;
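The invertibility criterion and the determinant relation from the previous section can be checked together. In this Python/NumPy sketch the 2×2 matrices are illustrative choices, one invertible and one singular:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])            # det(A) = -2, so A is invertible
assert not np.isclose(np.linalg.det(A), 0.0)

A_inv = np.linalg.inv(A)
assert np.allclose(A @ A_inv, np.eye(2))   # A A^{-1} = I
assert np.allclose(A_inv @ A, np.eye(2))   # A^{-1} A = I
assert np.isclose(np.linalg.det(A_inv), 1.0 / np.linalg.det(A))

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])            # det(S) = 0: no inverse exists
assert np.isclose(np.linalg.det(S), 0.0)
```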
&lt;br /&gt;
====Hermitian Matrices====&lt;br /&gt;
&lt;br /&gt;
Hermitian matrices are important for a variety of reasons, primarily because their eigenvalues are real.  Thus Hermitian matrices are used to represent density operators and density matrices, as well as Hamiltonians.  The density operator is a positive semi-definite Hermitian matrix (it has no negative eigenvalues) whose trace is equal to one.  In any case, it is often desirable to represent &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrices using a real linear combination of a complete set of &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrices.  A set of &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrices is complete if any Hermitian matrix can be represented in terms of the set.  Let &amp;lt;math&amp;gt;\{\lambda_i\}\,\!&amp;lt;/math&amp;gt; be a complete set.  Then any Hermitian matrix can be represented by &amp;lt;math&amp;gt;\sum_i a_i \lambda_i\,\!&amp;lt;/math&amp;gt;.  The set can always be taken to be a set of traceless Hermitian matrices together with the identity matrix.  This is convenient for the density matrix (its trace is one) because the identity part of an &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrix is &amp;lt;math&amp;gt;(1/N)\mathbb{I}\,\!&amp;lt;/math&amp;gt; if we take all the other matrices in the set to be traceless.  For the Hamiltonian, the set consists of a traceless part and an identity part, where the identity part just gives an overall phase that can often be neglected.  &lt;br /&gt;
&lt;br /&gt;
One example of such a set which is extremely useful is the set of Pauli matrices.  These are discussed in detail in [[Chapter 2 - Qubits and Collections of Qubits|Chapter 2]] and in particular in [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|Section 2.4]].&lt;br /&gt;
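Such a decomposition can be carried out numerically for the 2×2 case. In this Python/NumPy sketch (the sample Hermitian matrix is an illustrative assumption), the identity together with the Pauli matrices forms a complete set, and the real coefficients are extracted with traces:

```python
import numpy as np

# Identity plus Pauli matrices: a complete set for 2x2 Hermitian matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

H = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 0.0]])    # an illustrative Hermitian matrix

# Real expansion coefficients a_i = Tr(H lambda_i)/2 for this basis.
basis = (I2, X, Y, Z)
coeffs = [np.trace(H @ P).real / 2 for P in basis]
reconstructed = sum(a * P for a, P in zip(coeffs, basis))
assert np.allclose(reconstructed, H)
```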
&lt;br /&gt;
====Unitary Matrices====&lt;br /&gt;
&lt;br /&gt;
A ''unitary matrix'' &amp;lt;!-- \index{unitary matrix} --&amp;gt; &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; is one whose&lt;br /&gt;
inverse is also its Hermitian conjugate, &amp;lt;math&amp;gt;U^\dagger = U^{-1}\,\!&amp;lt;/math&amp;gt;, so that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
U^\dagger U = UU^\dagger = \mathbb{I}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If the unitary matrix also has determinant one, it is said to be ''a special unitary matrix''.&amp;lt;!-- \index{special unitary matrix}--&amp;gt;  The set of&lt;br /&gt;
&amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; unitary matrices is denoted&lt;br /&gt;
&amp;lt;math&amp;gt;U(n)\,\!&amp;lt;/math&amp;gt; and the set of special unitary matrices is denoted &amp;lt;math&amp;gt;SU(n)\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
Unitary matrices are particularly important in quantum mechanics&lt;br /&gt;
because they describe the evolution of quantum states.&lt;br /&gt;
This is because the rows and columns of a unitary matrix (viewed as vectors) are orthonormal.  (This is made clear in an example below.)  This means that when&lt;br /&gt;
they act on a basis vector of the form&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \left\vert j\right\rangle = &lt;br /&gt;
 \left(\begin{array}{c} 0 \\ 0 \\ \vdots \\ 1 \\ \vdots \\ 0 &lt;br /&gt;
  \end{array}\right), &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.11}}&lt;br /&gt;
with a single 1, in say the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th spot, and zeroes everywhere else, the result is a normalized complex vector.  Acting on a set of&lt;br /&gt;
orthonormal vectors of the form given in Eq.[[#eqC.11|(C.11)]]&lt;br /&gt;
will produce another orthonormal set.  &lt;br /&gt;
&lt;br /&gt;
Let us consider the example of a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; unitary matrix, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U = \left(\begin{array}{cc} &lt;br /&gt;
              a &amp;amp; b \\ &lt;br /&gt;
              c &amp;amp; d &lt;br /&gt;
           \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.12}}&lt;br /&gt;
The inverse of this matrix is the Hermitian conjugate, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U ^{-1} = U^\dagger = \left(\begin{array}{cc} &lt;br /&gt;
                         a^* &amp;amp; c^* \\ &lt;br /&gt;
                         b^* &amp;amp; d^* &lt;br /&gt;
                       \end{array}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.13}}&lt;br /&gt;
provided that the matrix &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; satisfies the constraints&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
|a|^2 + |b|^2 = 1, \; &amp;amp; \; ac^*+bd^* =0  \\&lt;br /&gt;
ca^*+db^*=0,  \;      &amp;amp;  \; |c|^2 + |d|^2 =1,&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|C.14}}&lt;br /&gt;
and&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
|a|^2 + |c|^2 = 1, \; &amp;amp; \; ba^*+dc^* =0  \\&lt;br /&gt;
b^*a+d^*c=0,  \;      &amp;amp;  \; |b|^2 + |d|^2 =1.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|C.15}}&lt;br /&gt;
Looking at each row as a vector, the constraints in&lt;br /&gt;
Eq.[[#eqC.14|(C.14)]] are the orthonormality conditions for the&lt;br /&gt;
vectors forming the rows.  Similarly, the constraints in&lt;br /&gt;
Eq.[[#eqC.15|(C.15)]] are the orthonormality conditions for the&lt;br /&gt;
vectors forming the columns.&lt;br /&gt;
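The constraints in Eqs. (C.14) and (C.15) can be verified on a concrete unitary matrix. In this Python/NumPy sketch a real rotation matrix serves as the illustrative example (the angle is arbitrary):

```python
import numpy as np

theta = 0.3                                      # arbitrary illustrative angle
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # a real (special) unitary matrix

assert np.allclose(U.conj().T @ U, np.eye(2))    # U^dagger U = I
assert np.allclose(U @ U.conj().T, np.eye(2))    # U U^dagger = I

# Row and column orthonormality, as in Eqs. (C.14) and (C.15).
a, b = U[0]
c, d = U[1]
assert np.isclose(abs(a)**2 + abs(b)**2, 1.0)
assert np.isclose(a*np.conj(c) + b*np.conj(d), 0.0)
assert np.isclose(abs(a)**2 + abs(c)**2, 1.0)
```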
&lt;br /&gt;
===More Dirac Notation===&lt;br /&gt;
&lt;br /&gt;
Now that we have a definition for the Hermitian conjugate, we consider the&lt;br /&gt;
case for a &amp;lt;math&amp;gt;1\times n\,\!&amp;lt;/math&amp;gt; matrix, i.e. a vector.  In Dirac notation, this is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi\right\rangle = \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The Hermitian conjugate comes up so often that we use the following&lt;br /&gt;
notation for vectors:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle \psi\right\vert = (\left\vert\psi\right\rangle)^\dagger = \left(\begin{array}{c} \alpha \\&lt;br /&gt;
    \beta \end{array}\right)^\dagger &lt;br /&gt;
 = \left( \alpha^*, \; \beta^* \right).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a row vector and in Dirac notation is denoted by the symbol &amp;lt;math&amp;gt;\left\langle\cdot \right\vert\!&amp;lt;/math&amp;gt;, which is called a ''bra''.  Let us consider a second complex vector, &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\phi\right\rangle = \left(\begin{array}{c} \gamma \\ \delta&lt;br /&gt;
  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The ''inner product'' &amp;lt;!-- \index{inner product}--&amp;gt; between &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; and &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert\phi\right\rangle\,\!&amp;lt;/math&amp;gt; is computed as follows:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \left\langle\phi\mid\psi\right\rangle &amp;amp; \equiv (\left\vert\phi\right\rangle)^\dagger\left\vert\psi \right\rangle   \\&lt;br /&gt;
                  &amp;amp;= (\gamma^*,\delta^*) \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right)   \\&lt;br /&gt;
                  &amp;amp;= \gamma^*\alpha + \delta^*\beta.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.16}}&lt;br /&gt;
If these two vectors are ''orthogonal'', &amp;lt;!-- \index{orthogonal!vectors} --&amp;gt;&lt;br /&gt;
then their inner product is zero, or &amp;lt;math&amp;gt;\left\langle\phi\mid\psi\right\rangle =0\,\!&amp;lt;/math&amp;gt;.  (The &amp;lt;math&amp;gt; \left\langle\phi\mid\psi\right\rangle \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
is called a ''bracket'', which is the product of the ''bra'' and the ''ket''.)  The inner product of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; with itself is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\psi\mid\psi\right\rangle = |\alpha|^2 + |\beta|^2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This vector is considered normalized when &amp;lt;math&amp;gt;\left\langle\psi\mid\psi\right\rangle = 1\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
More generally, we will consider vectors in &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; dimensions.  In this&lt;br /&gt;
case we write the vector in terms of a set of basis vectors,&lt;br /&gt;
&amp;lt;math&amp;gt;\{\left\vert i\right\rangle\}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;i = 0,1,2,...N-1\,\!&amp;lt;/math&amp;gt;.  This is an ordered set of&lt;br /&gt;
vectors which are labeled simply by integers.  If the set is orthogonal,&lt;br /&gt;
then &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i\mid j\right\rangle = 0, \;\; \mbox{for all }i\neq j.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If they are normalized, then &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i \mid i \right\rangle = 1, \;\;\mbox{for all } i.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If both of these are true, i.e. the entire set is orthonormal, we can&lt;br /&gt;
write&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i\mid j\right\rangle = \delta_{ij}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where the symbol &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; is called the Kronecker delta &amp;lt;!-- \index{Kronecker delta} --&amp;gt; and is defined by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\delta_{ij} = \begin{cases}&lt;br /&gt;
               1, &amp;amp; \mbox{if } i=j, \\&lt;br /&gt;
               0, &amp;amp; \mbox{if } i\neq j.&lt;br /&gt;
              \end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.17}}&lt;br /&gt;
Now consider &amp;lt;math&amp;gt;(N+1)\,\!&amp;lt;/math&amp;gt;-dimensional vectors, and let two such vectors&lt;br /&gt;
be expressed in the same basis as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \Psi\right\rangle = \sum_{i=0}^{N} \alpha_i\left\vert i\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
and&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\Phi\right\rangle = \sum_{j=0}^{N} \beta_j\left\vert j\right\rangle.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Then the inner product &amp;lt;!--\index{inner product}--&amp;gt; is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\langle\Psi\mid\Phi\right\rangle &amp;amp;= \left(\sum_{i=0}^{N}&lt;br /&gt;
             \alpha_i\left\vert i\right\rangle\right)^\dagger\left(\sum_{j=0}^{N} \beta_j\left\vert j\right\rangle\right)  \\&lt;br /&gt;
                 &amp;amp;= \sum_{ij} \alpha_i^*\beta_j\left\langle i\mid j\right\rangle  \\&lt;br /&gt;
                 &amp;amp;= \sum_{ij} \alpha_i^*\beta_j\delta_{ij}  \\&lt;br /&gt;
                 &amp;amp;= \sum_i\alpha^*_i\beta_i,&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.18}}&lt;br /&gt;
where the fact that the Kronecker delta is zero unless&lt;br /&gt;
&amp;lt;math&amp;gt;i=j\,\!&amp;lt;/math&amp;gt; has been used to obtain the last equality.  Taking the inner product of a vector&lt;br /&gt;
with itself gives&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\Psi\mid\Psi\right\rangle = \sum_i\alpha^*_i\alpha_i = \sum_i|\alpha_i|^2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
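The component formula of Eq. (C.18) can be checked numerically. In this Python/NumPy sketch the components are random illustrative values; the explicit sum over conjugated components agrees with the built-in inner product:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 5
alpha = rng.standard_normal(N) + 1j*rng.standard_normal(N)   # components of |Psi>
beta = rng.standard_normal(N) + 1j*rng.standard_normal(N)    # components of |Phi>

# <Psi|Phi> = sum_i alpha_i^* beta_i, as in Eq. (C.18).
inner = np.sum(np.conj(alpha) * beta)
assert np.isclose(inner, np.vdot(alpha, beta))

# <Psi|Psi> = sum_i |alpha_i|^2 is real and non-negative.
norm_sq = np.vdot(alpha, alpha)
assert norm_sq.real >= 0
assert np.isclose(norm_sq.imag, 0.0)
```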
This immediately gives us a very important property of the inner&lt;br /&gt;
product.  It tells us that, in general,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\Phi\mid\Phi\right\rangle \geq 0, \;\; \mbox{and} \;\; \left\langle\Phi\mid \Phi\right\rangle = 0&lt;br /&gt;
\Leftrightarrow \left\vert\Phi\right\rangle = 0. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
(The symbol &amp;lt;math&amp;gt;\Leftrightarrow\,\!&amp;lt;/math&amp;gt; means &amp;lt;nowiki&amp;gt;&amp;quot;if and only if,&amp;quot;&amp;lt;/nowiki&amp;gt; sometimes written as &amp;lt;nowiki&amp;gt;&amp;quot;iff.&amp;quot;&amp;lt;/nowiki&amp;gt;)  &lt;br /&gt;
&lt;br /&gt;
We could also expand a vector in a different basis.  Let us suppose&lt;br /&gt;
that the set &amp;lt;math&amp;gt;\{\left\vert e_k \right\rangle\}\,\!&amp;lt;/math&amp;gt; is an orthonormal basis &amp;lt;math&amp;gt;(\left\langle e_k \mid e_l\right\rangle =&lt;br /&gt;
\delta_{kl})\,\!&amp;lt;/math&amp;gt; that is different from the one considered earlier.  We&lt;br /&gt;
could expand our vector &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt; in terms of our new basis by&lt;br /&gt;
expanding our new basis in terms of our old basis.  Let us first&lt;br /&gt;
expand the &amp;lt;math&amp;gt;\left\vert e_k\right\rangle\,\!&amp;lt;/math&amp;gt; in terms of the &amp;lt;math&amp;gt;\left\vert j\right\rangle\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert e_k\right\rangle= \sum_j \left\vert j\right\rangle \left\langle j\mid e_k\right\rangle,&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.19}}&lt;br /&gt;
so that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert \Psi\right\rangle &amp;amp;= \sum_j \alpha_j\left\vert j\right\rangle  \\&lt;br /&gt;
           &amp;amp;= \sum_{j}\sum_k\alpha_j\left\vert e_k \right\rangle \left\langle e_k \mid j\right\rangle  \\ &lt;br /&gt;
           &amp;amp;= \sum_k \alpha_k^\prime \left\vert e_k\right\rangle, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.20}}&lt;br /&gt;
where &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\alpha_k^\prime = \sum_j \alpha_j \left\langle e_k \mid j\right\rangle. &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.21}}&lt;br /&gt;
Notice that the insertion of &amp;lt;math&amp;gt;\sum_k\left\vert e_k\right\rangle\left\langle e_k\right\vert\,\!&amp;lt;/math&amp;gt; didn't do anything to our original vector; it is the same vector, just in a&lt;br /&gt;
different basis.  Therefore, this is effectively the identity operator,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I} = \sum_k\left\vert e_k \right\rangle\left\langle e_k\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This is an important and quite useful relation.  &lt;br /&gt;
To interpret Eq.[[#eqC.19|(C.19)]], we can draw a close&lt;br /&gt;
analogy with three-dimensional real vectors.  The inner product&lt;br /&gt;
&amp;lt;math&amp;gt;\left\langle j \mid e_k \right\rangle\,\!&amp;lt;/math&amp;gt; can be interpreted as the projection of one vector onto&lt;br /&gt;
another.  This provides the part of &amp;lt;math&amp;gt;\left\vert e_k \right\rangle\,\!&amp;lt;/math&amp;gt; along &amp;lt;math&amp;gt;\left\vert j \right\rangle\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
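The resolution of the identity above can also be verified numerically; in this sketch the orthonormal basis vectors are taken, as an arbitrary illustrative choice, from the eigenvectors of a random Hermitian matrix.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T                      # a Hermitian matrix
_, E = np.linalg.eigh(H)                # columns of E are orthonormal |e_k>

# The sum of outer products |e_k><e_k| reproduces the identity operator.
resolution = sum(np.outer(E[:, k], E[:, k].conj()) for k in range(4))
assert np.allclose(resolution, np.eye(4))
```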
&lt;br /&gt;
===Transformations===&lt;br /&gt;
&lt;br /&gt;
Suppose we have two different orthonormal bases, &amp;lt;math&amp;gt;\{e_k\}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\{j\}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
The numbers &amp;lt;math&amp;gt;\left\langle e_k\mid j\right\rangle\,\!&amp;lt;/math&amp;gt; for all the different &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; are&lt;br /&gt;
often referred to as matrix elements since the set forms a matrix, with&lt;br /&gt;
&amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; labelling the rows and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; labelling the columns.  Thus we&lt;br /&gt;
can represent the transformation from one basis to another with a matrix&lt;br /&gt;
transformation.  Let &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; be the matrix with elements &amp;lt;math&amp;gt;m_{kj} =&lt;br /&gt;
\left\langle e_k\mid j\right\rangle\,\!&amp;lt;/math&amp;gt;.  The transformation from one basis to another,&lt;br /&gt;
written in terms of the coefficients of &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; A^\prime = MA, &amp;lt;/math&amp;gt;|C.22}}&lt;br /&gt;
where &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
A^\prime = \left(\begin{array}{c} \alpha_1^\prime \\ \alpha_2^\prime \\ \vdots \\&lt;br /&gt;
    \alpha_n^\prime \end{array}\right), \;\; &lt;br /&gt;
\mbox{ and } \;\;&lt;br /&gt;
A = \left(\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\&lt;br /&gt;
    \alpha_n\end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This sort of transformation is a change of basis.  Often, when one vector is transformed into another, the transformation can be viewed as acting on the components of the vector and is likewise represented by a matrix.  Thus transformations can either be&lt;br /&gt;
represented by the matrix equation, like Eq.[[#eqC.22|(C.22)]], or in terms of the components, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\alpha_k^\prime = \sum_j \alpha_j \left\langle e_k \mid j \right\rangle = \sum_j m_{kj}\alpha_j. &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.23}}&lt;br /&gt;
In the case that we consider a matrix transformation of basis elements, we call it a passive transformation.  (The transformation does nothing to the object, but only changes the basis in which the object is described.)  An active transformation is one where the object itself is transformed.  Often these two transformations, active and passive, are very simply related.  However, the distinction can be very important.  &lt;br /&gt;
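A minimal numerical sketch of the passive transformation of Eq. (C.22); the new basis and the state coefficients below are arbitrary illustrative choices.&lt;br /&gt;

```python
import numpy as np

# Old basis: the standard basis |j>.  New basis |e_k>: columns of a unitary E,
# here taken from the eigenvectors of sigma_y as an arbitrary example.
E = np.linalg.eigh(np.array([[0, -1j], [1j, 0]]))[1]

# m_kj = <e_k|j>: row k of M is the conjugate of column k of E.
M = E.conj().T

alpha = np.array([1.0, 1j]) / np.sqrt(2)   # components in the old basis
alpha_prime = M @ alpha                    # Eq. (C.22): A' = M A

# A passive transformation leaves the vector's norm unchanged.
assert np.isclose(np.vdot(alpha, alpha), np.vdot(alpha_prime, alpha_prime))
```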
&lt;br /&gt;
For a general transformation matrix &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; acting on a vector,&lt;br /&gt;
the matrix elements in a particular basis &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
t_{ij} = \left\langle i\right\vert T \left\vert j\right\rangle, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
just as elements of a vector can be found using&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i\mid \Psi \right\rangle = \alpha_i.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Transformations of a Qubit====&lt;br /&gt;
&lt;br /&gt;
It is worth belaboring the point somewhat and presenting several ways in which to parametrize the set of transformations of a qubit.  A qubit state is represented by a complex two-dimensional vector that has been normalized to one:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \psi\right\rangle = \alpha_0 \left\vert 0 \right\rangle + \alpha_1 \left\vert 1\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1 \end{array}\right), \;\;\;\; |\alpha_0|^2 + |\alpha_1|^2 = 1.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The most general matrix transformation that will take this to any other state of the same form (complex, 2-d vector with unit norm) is a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; unitary matrix.  In [[Chapter 2 - Qubits and Collections of Qubits|Chapter 2]], several specific examples of qubit transformations were given; in [[Chapter 3 - Physics of Quantum Information|Chapter 3]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|Section 3.4]] it was stated that an element of SU(2) can be written as (see [[Chapter 3 - Physics of Quantum Information#Exponentian of a Matrix|Section 3.2.1, Exponentiation of a Matrix]])&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
U(\theta) &amp;amp;= \exp(-i\vec{n}\cdot\vec{\sigma} \theta/2) \\&lt;br /&gt;
          &amp;amp;= (\mathbb{I}\cos(\theta/2) -i\vec{n}\cdot\vec{\sigma} \sin(\theta/2))&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.24}}&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{n}\,\!&amp;lt;/math&amp;gt; is a unit vector, &amp;lt;math&amp;gt;|\vec{n}|=1\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\vec{n}\cdot\vec{\sigma} =&lt;br /&gt;
n_1\sigma_1+n_2\sigma_2+n_3\sigma_3\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
Explicitly, this is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \exp(-i\vec{n}\cdot\vec{\sigma} \theta/2) &amp;amp;= \left(\begin{array}{cc}&lt;br /&gt;
                                  1 &amp;amp; 0 \\ &lt;br /&gt;
                                  0 &amp;amp; 1 \end{array}\right)\cos(\theta/2) \\&lt;br /&gt;
                        &amp;amp; \;\;\;   + (-i)\left[ n_1\left(\begin{array}{cc}&lt;br /&gt;
                                  0 &amp;amp; 1 \\ &lt;br /&gt;
                                  1 &amp;amp; 0 \end{array}\right)&lt;br /&gt;
                              + n_2\left(\begin{array}{cc}&lt;br /&gt;
                                  0 &amp;amp; -i \\ &lt;br /&gt;
                                  i &amp;amp; 0 \end{array}\right)&lt;br /&gt;
                              + n_3\left(\begin{array}{cc}&lt;br /&gt;
                                  1 &amp;amp; 0 \\ &lt;br /&gt;
                                  0 &amp;amp; -1 \end{array}\right)\right]\sin(\theta/2) \\&lt;br /&gt;
                                &amp;amp;= &lt;br /&gt;
         \left(\begin{array}{cc}&lt;br /&gt;
  \cos(\theta/2) -in_3\sin(\theta/2) &amp;amp; (-in_1-n_2)\sin(\theta/2) \\ &lt;br /&gt;
   (-in_1+n_2)\sin(\theta/2) &amp;amp; \cos(\theta/2) +in_3\sin(\theta/2)  \end{array}\right).&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Notice that this is a ''special unitary matrix.''  (See Section [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Unitary Matrices]].)&lt;br /&gt;
To see that this is the most general SU(2) matrix, one needs to verify that any complex &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; unitary matrix can be written in this form.  (One way to do this is to start with a generic matrix and impose the restrictions.  Here one may simply convince oneself that this is general through observation by acting on basis vectors.)  This is the most general qubit transformation and can be interpreted as a rotation about the axis &amp;lt;math&amp;gt;\hat{n}\,\!&amp;lt;/math&amp;gt; by an angle &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
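Eq. (C.24) is straightforward to implement; the following sketch builds the explicit &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; form and checks that it is special unitary.  The axis and angle chosen are arbitrary.&lt;br /&gt;

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def su2(n, theta):
    """U = I cos(theta/2) - i (n . sigma) sin(theta/2), Eq. (C.24)."""
    n = np.asarray(n) / np.linalg.norm(n)       # enforce |n| = 1
    ndots = n[0] * sx + n[1] * sy + n[2] * sz
    return np.eye(2) * np.cos(theta / 2) - 1j * ndots * np.sin(theta / 2)

U = su2([1, 1, 1], 0.7)
assert np.allclose(U.conj().T @ U, np.eye(2))   # unitary
assert np.isclose(np.linalg.det(U), 1)          # special: det U = 1
```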
&lt;br /&gt;
Another parametrization of this set of matrices is the following, called the Euler angle parametrization:&lt;br /&gt;
 {{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U_{EA}   = \exp(-i\sigma_z \alpha/2) \exp(-i\sigma_y \beta/2) \exp(-i\sigma_z \gamma/2).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.25}}&lt;br /&gt;
In this case the matrices &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; are not unique; any two distinct Pauli matrices may be chosen.  This parametrization is quite simple, useful, and generalizes to SU(N) for arbitrary N.  In the simple case of a qubit, one may convince oneself of its generality by acting on basis vectors as before.  Alternatively, with a little thought, one may see that rotating to a position on the sphere by the first angle, followed by rotations using the other two, provides for a general orientation of an object.&lt;br /&gt;
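The Euler-angle form of Eq. (C.25) can be sketched directly using the closed forms of the two exponentials (&amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt; is diagonal, and the &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; exponential is a real rotation matrix); the angles below are arbitrary.&lt;br /&gt;

```python
import numpy as np

def rz(a):
    """exp(-i sigma_z a/2): diagonal, since sigma_z is diagonal."""
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

def ry(b):
    """exp(-i sigma_y b/2): a real rotation matrix."""
    return np.array([[np.cos(b / 2), -np.sin(b / 2)],
                     [np.sin(b / 2),  np.cos(b / 2)]])

def euler(alpha, beta, gamma):
    """Eq. (C.25): U_EA = Rz(alpha) Ry(beta) Rz(gamma)."""
    return rz(alpha) @ ry(beta) @ rz(gamma)

U = euler(0.3, 1.1, -0.8)
assert np.allclose(U.conj().T @ U, np.eye(2))   # unitary
assert np.isclose(np.linalg.det(U), 1)          # det = 1, so U is in SU(2)
```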
&lt;br /&gt;
====Similarity Transformation====&lt;br /&gt;
&lt;br /&gt;
A ''similarity transformation'' &amp;lt;!--\index{similarity transformation}--&amp;gt; &lt;br /&gt;
of an &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; by an invertible matrix &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;S A S^{-1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
There are (at least) three important things to note about similarity&lt;br /&gt;
transformations: &lt;br /&gt;
#Similarity transformations leave the trace of a matrix unchanged.  This is shown explicitly in [[#The Trace|Section 3.5]].&lt;br /&gt;
#Similarity transformations leave the determinant of a matrix unchanged, or invariant.  This is because &amp;lt;center&amp;gt;&amp;lt;math&amp;gt; \det(SAS^{-1}) = \det(S)\det(A)\det(S^{-1}) =\det(S)\det(A)\frac{1}{\det(S)} = \det(A). \,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
#Simultaneous similarity transformations of matrices in an equation will leave the equation unchanged.  Let &amp;lt;math&amp;gt;A^\prime = SAS^{-1}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B^\prime = SBS^{-1}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;C^\prime = SCS^{-1}\,\!&amp;lt;/math&amp;gt;.  If &amp;lt;math&amp;gt;AB=C\,\!&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;A^\prime B^\prime = C^\prime\,\!&amp;lt;/math&amp;gt;, since &amp;lt;math&amp;gt;A^\prime B^\prime = SAS^{-1}SBS^{-1} = SABS^{-1} =  SCS^{-1}=C^\prime\,\!&amp;lt;/math&amp;gt;.  The two matrices &amp;lt;math&amp;gt;A^\prime\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; are said to be ''similar''.&lt;br /&gt;
&amp;lt;!-- \index{similar matrices} --&amp;gt;&lt;br /&gt;
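The trace and determinant invariances (points 1 and 2 above) are easy to check numerically; the matrices below are random, and a generic random matrix is invertible with probability one.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
S = rng.normal(size=(3, 3))          # generic, hence invertible

A_sim = S @ A @ np.linalg.inv(S)     # similarity transformation of A by S

assert np.isclose(np.trace(A_sim), np.trace(A))            # trace invariant
assert np.isclose(np.linalg.det(A_sim), np.linalg.det(A))  # det invariant
```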
&lt;br /&gt;
===Eigenvalues and Eigenvectors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- \index{eigenvalues}\index{eigenvectors} --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Any matrix can be brought into diagonal form in the following sense: for&lt;br /&gt;
every complex matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; there is a diagonal matrix &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; such that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
M = UDV,  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.26}}&lt;br /&gt;
where &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; are unitary matrices.  This form is called a singular value decomposition of the matrix and the entries of the diagonal matrix &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; are called the ''singular values'' &amp;lt;!--\index{singular values}--&amp;gt; &lt;br /&gt;
of the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt;.  However, the singular values are not always easy to find.  &lt;br /&gt;
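While the singular values are not easy to find by hand, standard numerical libraries compute the decomposition of Eq. (C.26) directly; a sketch using NumPy's SVD routine (the test matrix is an arbitrary random choice).&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# numpy.linalg.svd returns U, the singular values s, and V with M = U D V.
U, s, V = np.linalg.svd(M)
D = np.diag(s)

assert np.allclose(U @ D @ V, M)                # Eq. (C.26)
assert np.allclose(U.conj().T @ U, np.eye(3))   # U is unitary
assert np.allclose(V.conj().T @ V, np.eye(3))   # V is unitary
assert np.all(s >= 0)                           # singular values are non-negative
```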
&lt;br /&gt;
For the special case that the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is Hermitian &amp;lt;math&amp;gt;(M^\dagger = M)\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; can be written as&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
M = U D U^\dagger&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.27}}&lt;br /&gt;
where &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; is unitary &amp;lt;math&amp;gt;(U^{-1}=U^\dagger)\,\!&amp;lt;/math&amp;gt;.  In this case the elements&lt;br /&gt;
of the matrix &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; are called ''eigenvalues''. &amp;lt;!--\index{eigenvalues}--&amp;gt;&lt;br /&gt;
Very often eigenvalues are introduced as solutions to the equation&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
M \left\vert v\right\rangle = \lambda \left\vert v\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\left\vert v\right\rangle\,\!&amp;lt;/math&amp;gt; is an ''eigenvector''. &amp;lt;!--\index{eigenvector} --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To find the eigenvalues and eigenvectors of a matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt;, we follow a&lt;br /&gt;
standard procedure, which is to calculate&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(\lambda\mathbb{I} - M) = 0&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.28}}&lt;br /&gt;
and then solve for &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  The different solutions for &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt; form the&lt;br /&gt;
set of eigenvalues, called the ''spectrum''. &amp;lt;!-- \index{spectrum}--&amp;gt; Let the different eigenvalues be denoted by &amp;lt;math&amp;gt;\lambda_i\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;i=1,2,...,n\,\!&amp;lt;/math&amp;gt;, for an &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix.  If two&lt;br /&gt;
eigenvalues are equal, we say the spectrum is &lt;br /&gt;
''degenerate''. &amp;lt;!--\index{degenerate}--&amp;gt; To find the&lt;br /&gt;
eigenvectors corresponding to the different eigenvalues, the equation &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
M \left\vert v\right\rangle = \lambda_i \left\vert v\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
must be solved for each value of &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;.  Notice that this equation&lt;br /&gt;
holds even if we multiply both sides by some complex number.  This&lt;br /&gt;
implies that an eigenvector can always be scaled.  Usually they are&lt;br /&gt;
normalized to obtain an orthonormal set.  As we will see by example,&lt;br /&gt;
degenerate eigenvalues require some care.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
&lt;br /&gt;
Consider a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\sigma = \left(\begin{array}{cc} &lt;br /&gt;
               1+a &amp;amp; b-ic \\&lt;br /&gt;
              b+ic &amp;amp; 1-a  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.29}}&lt;br /&gt;
To find the eigenvalues &amp;lt;!--\index{eigenvalues}--&amp;gt; &lt;br /&gt;
of this, we follow a standard procedure, which&lt;br /&gt;
is to calculate &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(\sigma-\lambda\mathbb{I}) = 0,&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.30}}&lt;br /&gt;
and solve for &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  The eigenvalues of this matrix are given by&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det\left(\begin{array}{cc} &lt;br /&gt;
               1+a-\lambda &amp;amp; b-ic \\&lt;br /&gt;
              b+ic &amp;amp; 1-a-\lambda  \end{array}\right) =0,  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
which implies that the eigenvalues are&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda_{\pm} = 1\pm \sqrt{a^2+b^2+c^2}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
and the corresponding (unnormalized) eigenvectors &amp;lt;!--\index{eigenvectors}--&amp;gt; are&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_+=\left(\begin{array}{c}&lt;br /&gt;
        b - ic \\ &lt;br /&gt;
        \sqrt{a^2+b^2+c^2} - a &lt;br /&gt;
        \end{array}\right), \;\;&lt;br /&gt;
v_-= \left(\begin{array}{c}&lt;br /&gt;
         b - ic \\ &lt;br /&gt;
         -\sqrt{a^2+b^2+c^2} - a \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
These expressions are useful for calculating properties of qubit&lt;br /&gt;
states as will be seen in the text.&lt;br /&gt;
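The eigenvalues found in this example can be confirmed numerically for particular parameter values; the values of a, b, c below are arbitrary.&lt;br /&gt;

```python
import numpy as np

a, b, c = 0.3, -0.4, 0.2     # arbitrary real parameters
sigma = np.array([[1 + a, b - 1j * c],
                  [b + 1j * c, 1 - a]])

# Eigenvalues predicted above: 1 +/- sqrt(a^2 + b^2 + c^2).
r = np.sqrt(a**2 + b**2 + c**2)
evals = np.linalg.eigvalsh(sigma)    # ascending order for Hermitian input
assert np.allclose(evals, [1 - r, 1 + r])
```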
&lt;br /&gt;
====Example 2====&lt;br /&gt;
&lt;br /&gt;
Now consider a &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrix,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
N= \left(\begin{array}{ccc}&lt;br /&gt;
              1 &amp;amp; -i &amp;amp; 0 \\&lt;br /&gt;
              i &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 0 &amp;amp; 1 \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
First we calculate&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det\left(\begin{array}{ccc}&lt;br /&gt;
              1-\lambda &amp;amp; -i &amp;amp; 0 \\&lt;br /&gt;
              i         &amp;amp; 1-\lambda  &amp;amp; 0 \\&lt;br /&gt;
              0         &amp;amp;       0    &amp;amp; 1-\lambda &lt;br /&gt;
           \end{array}\right) &lt;br /&gt;
    = (1-\lambda)[(1-\lambda)^2-1].&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This implies that the eigenvalues are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda = 1,0, \mbox{ or } 2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Let &amp;lt;math&amp;gt;\lambda_1=1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\lambda_0 = 0\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\lambda_2 = 2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
To find eigenvectors, &amp;lt;!--\index{eigenvalues}--&amp;gt; we calculate&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Nv &amp;amp;= \lambda v, \\&lt;br /&gt;
\left(\begin{array}{ccc}&lt;br /&gt;
              1 &amp;amp; -i &amp;amp; 0 \\&lt;br /&gt;
              i &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 0 &amp;amp; 1 \end{array}\right)\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right) &amp;amp;= \lambda\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right)&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.31}}&lt;br /&gt;
for each &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
For &amp;lt;math&amp;gt;\lambda = 1\,\!&amp;lt;/math&amp;gt;, we get the following equations:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_1 -iv_2 &amp;amp;= v_1, \\&lt;br /&gt;
iv_1+v_2 &amp;amp;= v_2,  \\&lt;br /&gt;
v_3 &amp;amp;= v_3. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.32}}&lt;br /&gt;
Solving these equations gives &amp;lt;math&amp;gt;v_2 =0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;v_1 =0\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;v_3\,\!&amp;lt;/math&amp;gt; any non-zero number (which will be chosen to normalize the vector).  For &amp;lt;math&amp;gt;\lambda&lt;br /&gt;
=0\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_1  &amp;amp;= iv_2, \\&lt;br /&gt;
v_3 &amp;amp;= 0. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.33}}&lt;br /&gt;
And finally, for &amp;lt;math&amp;gt;\lambda = 2\,\!&amp;lt;/math&amp;gt;, we obtain&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_1 -iv_2 &amp;amp;= 2v_1, \\&lt;br /&gt;
iv_1+v_2 &amp;amp;= 2v_2, \\&lt;br /&gt;
v_3 &amp;amp;= 2v_3, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.34}}&lt;br /&gt;
so that &amp;lt;math&amp;gt;v_1 = -iv_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v_3 = 0\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
Therefore, our three eigenvectors &amp;lt;!--\index{eigenvalues}--&amp;gt; are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_0 = \frac{1}{\sqrt{2}}\left(\begin{array}{c} i \\ 1\\ 0 \end{array}\right), \; &lt;br /&gt;
v_1 = \left(\begin{array}{c} 0 \\ 0\\ 1 \end{array}\right), \; &lt;br /&gt;
v_2 = \frac{1}{\sqrt{2}}\left(\begin{array}{c} -i \\ 1\\ 0 \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The matrix &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
V= (v_0,v_1,v_2) = \left(\begin{array}{ccc}&lt;br /&gt;
              i/\sqrt{2} &amp;amp; 0     &amp;amp; -i/\sqrt{2} \\&lt;br /&gt;
              1/\sqrt{2} &amp;amp; 0     &amp;amp; 1/\sqrt{2} \\&lt;br /&gt;
              0          &amp;amp; 1     &amp;amp; 0 \end{array}\right)&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
is the matrix that diagonalizes &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; in the following way:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
N = VDV^\dagger&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
D = \left(\begin{array}{ccc}&lt;br /&gt;
              0 &amp;amp; 0  &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 1  &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 0  &amp;amp; 2 \end{array}\right)&lt;br /&gt;
.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
We may write this as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
V^\dagger N V = D.   &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This is sometimes called the ''eigenvalue decomposition''&amp;lt;!--\index{eigenvalue decomposition}--&amp;gt;  of the matrix and can also be written as&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
N = \sum_i \lambda_i v_iv^\dagger_i.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.35}}&lt;br /&gt;
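The eigenvalue decomposition worked out in this example can be verified numerically:&lt;br /&gt;

```python
import numpy as np

N = np.array([[1, -1j, 0],
              [1j,  1, 0],
              [0,   0, 1]])

v0 = np.array([1j, 1, 0]) / np.sqrt(2)   # eigenvalue 0
v1 = np.array([0, 0, 1])                 # eigenvalue 1
v2 = np.array([-1j, 1, 0]) / np.sqrt(2)  # eigenvalue 2

V = np.column_stack([v0, v1, v2])        # eigenvectors as columns
D = np.diag([0, 1, 2])

assert np.allclose(V @ D @ V.conj().T, N)   # N = V D V^dagger
assert np.allclose(V.conj().T @ N @ V, D)   # equivalently, V^dagger N V = D
```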
&lt;br /&gt;
====Example 3====&lt;br /&gt;
&lt;br /&gt;
Next, consider the complex &amp;lt;math&amp;gt;3\times 3&amp;lt;/math&amp;gt; Hermitian matrix &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
M = \left(\begin{array}{ccc}&lt;br /&gt;
              \frac{5}{2} &amp;amp; 0  &amp;amp; \frac{i}{2} \\&lt;br /&gt;
              0 &amp;amp; 2  &amp;amp; 0 \\&lt;br /&gt;
              -\frac{i}{2} &amp;amp; 0  &amp;amp; \frac{5}{2} \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
First we calculate&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det\left(\begin{array}{ccc}&lt;br /&gt;
              \frac{5}{2}-\lambda &amp;amp; 0 &amp;amp; \frac{i}{2} \\&lt;br /&gt;
              0         &amp;amp; 2-\lambda  &amp;amp; 0 \\&lt;br /&gt;
              -\frac{i}{2}         &amp;amp;       0    &amp;amp; \frac{5}{2}-\lambda &lt;br /&gt;
           \end{array}\right) &lt;br /&gt;
    = (2-\lambda)\left[\left(\frac{5}{2}-\lambda\right)^2-\frac{1}{4}\right].&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This implies that the eigenvalues are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda = 2,2, \mbox{ or } 3.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Note that there are two that are the same, or degenerate.  &lt;br /&gt;
Let &amp;lt;math&amp;gt;\lambda_1=2\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\lambda_2 = 2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\lambda_3 = 3\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
To find eigenvectors, &amp;lt;!--\index{eigenvalues}--&amp;gt; we calculate&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Mv &amp;amp;= \lambda v, \\&lt;br /&gt;
\left(\begin{array}{ccc}&lt;br /&gt;
              \frac{5}{2} &amp;amp; 0 &amp;amp; \frac{i}{2} \\&lt;br /&gt;
              0 &amp;amp; 2 &amp;amp; 0 \\&lt;br /&gt;
              -\frac{i}{2} &amp;amp; 0 &amp;amp; \frac{5}{2} \end{array}\right)\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right) &amp;amp;= \lambda\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right)&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.36}}&lt;br /&gt;
for each &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
For &amp;lt;math&amp;gt;\lambda = 3\,\!&amp;lt;/math&amp;gt;, we get the following equations:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{5}{2}v_1 + \frac{i}{2}v_3 &amp;amp;= 3v_1, \\&lt;br /&gt;
2v_2 &amp;amp;= 3v_2,  \\&lt;br /&gt;
-\frac{i}{2}v_1 + \frac{5}{2}v_3 &amp;amp;= 3v_3, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.37}}&lt;br /&gt;
so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
iv_3  &amp;amp;= v_1, \\&lt;br /&gt;
v_2 &amp;amp;= 0. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.38}}&lt;br /&gt;
Now for &amp;lt;math&amp;gt;\lambda = 2\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{5}{2}v_1 +\frac{i}{2}v_3 &amp;amp;= 2 v_1, \\&lt;br /&gt;
2v_2 &amp;amp;= 2v_2, \\&lt;br /&gt;
-\frac{i}{2}v_1 + \frac{5}{2}v_3 &amp;amp;= 2v_3, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.39}}&lt;br /&gt;
so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_3  &amp;amp;= iv_1, \\&lt;br /&gt;
v_2 &amp;amp;= \mbox{anything}. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.40}}&lt;br /&gt;
We would like to have a set of orthonormal vectors.  (We can always choose the set to be orthonormal.)  We choose the three eigenvectors to be&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_2 = \left(\begin{array}{c} 1 \\ a \\ i&lt;br /&gt;
  \end{array}\right), \;\;  &lt;br /&gt;
v_2^\prime = \left(\begin{array}{c} 1 \\ a^\prime \\ i&lt;br /&gt;
  \end{array}\right), \;\;&lt;br /&gt;
v_3 = \left(\begin{array}{c} i \\ 0 \\ 1&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
We set the inner product of the two vectors &amp;lt;math&amp;gt; v_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v_2^\prime \,\!&amp;lt;/math&amp;gt; equal to zero so as to make them orthogonal: &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
1 + a^* a^\prime +1 = 2 + a^* a^\prime = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now we can choose &amp;lt;math&amp;gt; a = \sqrt{2}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; a^\prime = -\sqrt{2}\,\!&amp;lt;/math&amp;gt; so that the normalized eigenvectors are&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_2 = \frac{1}{2}\left(\begin{array}{c} 1 \\ \sqrt{2} \\ i&lt;br /&gt;
  \end{array}\right), \;\;  &lt;br /&gt;
v_2^\prime = \frac{1}{2}\left(\begin{array}{c} 1 \\ -\sqrt{2} \\ i&lt;br /&gt;
  \end{array}\right), \;\;&lt;br /&gt;
v_3 = \frac{1}{\sqrt{2}}\left(\begin{array}{c} i \\ 0 \\ 1&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
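The degenerate-eigenvalue construction of this example can also be checked numerically:&lt;br /&gt;

```python
import numpy as np

M = np.array([[ 2.5,  0, 0.5j],
              [ 0,    2, 0   ],
              [-0.5j, 0, 2.5 ]])

# The orthonormal eigenvectors chosen in the text.
v2  = np.array([1,  np.sqrt(2), 1j]) / 2     # eigenvalue 2
v2p = np.array([1, -np.sqrt(2), 1j]) / 2     # eigenvalue 2 (degenerate partner)
v3  = np.array([1j, 0, 1]) / np.sqrt(2)      # eigenvalue 3

for v, lam in [(v2, 2), (v2p, 2), (v3, 3)]:
    assert np.allclose(M @ v, lam * v)       # each is an eigenvector
    assert np.isclose(np.vdot(v, v), 1)      # each is normalized

# The degenerate pair was made orthogonal by the choice a = -a' = sqrt(2).
assert np.isclose(abs(np.vdot(v2, v2p)), 0)
```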
&lt;br /&gt;
===Tensor Products===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The tensor product, &amp;lt;!--\index{tensor product} --&amp;gt;&lt;br /&gt;
or the Kronecker product, &amp;lt;!--\index{Kronecker product}--&amp;gt;&lt;br /&gt;
is used extensively in quantum mechanics and&lt;br /&gt;
throughout the course.  It is commonly denoted with a &amp;lt;math&amp;gt;\otimes\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
symbol, although this is often left out.  In fact, the following&lt;br /&gt;
are commonly found in the literature as notation for the tensor&lt;br /&gt;
product of two vectors &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert\Phi\right\rangle\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert\Psi\right\rangle\otimes\left\vert\Phi\right\rangle &amp;amp;= \left\vert\Psi\right\rangle\left\vert\Phi\right\rangle  \\&lt;br /&gt;
                         &amp;amp;= \left\vert\Psi\Phi\right\rangle.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.41}}&lt;br /&gt;
Each of these notations has its advantages, and all will be used in&lt;br /&gt;
different circumstances in this text.  &lt;br /&gt;
&lt;br /&gt;
The tensor product is also often used for operators.  Several&lt;br /&gt;
examples &lt;br /&gt;
will be given, one that explicitly calculates the tensor product for&lt;br /&gt;
two vectors and one that calculates it for two matrices which could&lt;br /&gt;
represent operators.  However, these cases are not really different,&lt;br /&gt;
since a vector is just a &amp;lt;math&amp;gt;1\times n\,\!&amp;lt;/math&amp;gt; or an &amp;lt;math&amp;gt;n\times 1\,\!&amp;lt;/math&amp;gt; matrix.  It is also&lt;br /&gt;
noteworthy that the two objects in the tensor product need not be of&lt;br /&gt;
the same type.  In general, a tensor product of an &amp;lt;math&amp;gt;n\times m\,\!&amp;lt;/math&amp;gt; object&lt;br /&gt;
(array) with a &amp;lt;math&amp;gt;p\times q\,\!&amp;lt;/math&amp;gt; object will produce an &amp;lt;math&amp;gt;np\times mq\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
object.  &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
The tensor product of two objects is computed as follows.&lt;br /&gt;
Let &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; be an &amp;lt;math&amp;gt;n\times m\,\!&amp;lt;/math&amp;gt; array and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; a &amp;lt;math&amp;gt;p\times q\,\!&amp;lt;/math&amp;gt; array, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
A = \left(\begin{array}{cccc} &lt;br /&gt;
           a_{11} &amp;amp; a_{12} &amp;amp; \cdots &amp;amp; a_{1m} \\&lt;br /&gt;
           a_{21} &amp;amp; a_{22} &amp;amp; \cdots &amp;amp; a_{2m} \\&lt;br /&gt;
           \vdots &amp;amp;        &amp;amp; \ddots &amp;amp;      \\&lt;br /&gt;
           a_{n1} &amp;amp; a_{n2} &amp;amp; \cdots &amp;amp; a_{nm} \end{array}\right),&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.42}}&lt;br /&gt;
and similarly for &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;.  Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
A\otimes B = \left(\begin{array}{cccc} &lt;br /&gt;
             a_{11}B &amp;amp; a_{12}B &amp;amp; \cdots &amp;amp; a_{1m}B \\&lt;br /&gt;
             a_{21}B &amp;amp; a_{22}B &amp;amp; \cdots &amp;amp; a_{2m}B \\&lt;br /&gt;
             \vdots  &amp;amp;         &amp;amp; \ddots &amp;amp;      \\&lt;br /&gt;
             a_{n1}B &amp;amp; a_{n2}B &amp;amp; \cdots &amp;amp; a_{nm}B \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.43}}&lt;br /&gt;
&lt;br /&gt;
Let us now consider two examples.  First let &amp;lt;math&amp;gt;\left\vert\phi\right\rangle\,\!&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; be as before,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi\right\rangle = \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right) \;\; &lt;br /&gt;
\mbox{and} \;\; &lt;br /&gt;
\left\vert\phi\right\rangle = \left(\begin{array}{c} \gamma \\ \delta&lt;br /&gt;
  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert\psi\right\rangle\otimes\left\vert\phi\right\rangle &amp;amp;= \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right) &lt;br /&gt;
\otimes &lt;br /&gt;
\left(\begin{array}{c} \gamma \\ \delta&lt;br /&gt;
  \end{array}\right)   \\&lt;br /&gt;
                            &amp;amp;= \left(\begin{array}{c} \alpha\gamma\\ &lt;br /&gt;
                                                     \alpha\delta \\&lt;br /&gt;
                                                     \beta\gamma \\ &lt;br /&gt;
                                                     \beta\delta &lt;br /&gt;
                                       \end{array}\right).  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.44}}&lt;br /&gt;
Also&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert\psi\right\rangle\otimes\left\langle\phi\right\vert &amp;amp;= \left\vert\psi\right\rangle\left\langle\phi\right\vert \\&lt;br /&gt;
                            &amp;amp;= \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right) &lt;br /&gt;
\otimes &lt;br /&gt;
\left(\begin{array}{cc} \gamma^* &amp;amp; \delta^*&lt;br /&gt;
  \end{array}\right)   \\&lt;br /&gt;
                            &amp;amp;= \left(\begin{array}{cc}&lt;br /&gt;
                                \alpha\gamma^* &amp;amp; \alpha\delta^* \\&lt;br /&gt;
                                \beta\gamma^*  &amp;amp; \beta\delta^* &lt;br /&gt;
                                       \end{array}\right).  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.45}}&lt;br /&gt;
&lt;br /&gt;
Now consider two matrices&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
A = \left(\begin{array}{cc} &lt;br /&gt;
                 a &amp;amp; b \\&lt;br /&gt;
                 c &amp;amp; d  \end{array}\right) &lt;br /&gt;
 \;\; \mbox{and} \;\;&lt;br /&gt;
B = \left(\begin{array}{cc} &lt;br /&gt;
               e &amp;amp; f \\&lt;br /&gt;
               g &amp;amp; h  \end{array}\right).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A\otimes B &amp;amp;=  \left(\begin{array}{cc} &lt;br /&gt;
                 a &amp;amp; b \\&lt;br /&gt;
                 c &amp;amp; d  \end{array}\right) &lt;br /&gt;
 \otimes&lt;br /&gt;
               \left(\begin{array}{cc} &lt;br /&gt;
                 e &amp;amp; f \\&lt;br /&gt;
                 g &amp;amp; h  \end{array}\right)   \\  &lt;br /&gt;
           &amp;amp;=  \left(\begin{array}{cccc} &lt;br /&gt;
                 ae &amp;amp; af &amp;amp; be &amp;amp; bf \\&lt;br /&gt;
                 ag &amp;amp; ah &amp;amp; bg &amp;amp; bh \\&lt;br /&gt;
                 ce &amp;amp; cf &amp;amp; de &amp;amp; df \\&lt;br /&gt;
                 cg &amp;amp; ch &amp;amp; dg &amp;amp; dh \end{array}\right).  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.46}}&lt;br /&gt;
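The block rule of Eq. (C.43) is exactly what NumPy's `np.kron` computes, so the worked examples above can be checked numerically. The following sketch (NumPy and the sample numerical values are illustrative choices, not part of the text) reproduces Eqs. (C.44) and (C.46):

```python
import numpy as np

# |psi> = (alpha, beta)^T and |phi> = (gamma, delta)^T, with
# illustrative numerical values for the complex amplitudes.
alpha, beta, gamma, delta = 0.6, 0.8j, 1.0, 0.5

psi = np.array([alpha, beta])
phi = np.array([gamma, delta])

# np.kron implements the block rule of Eq. (C.43): each entry of the
# first argument multiplies the whole second argument.
print(np.kron(psi, phi))   # (alpha*gamma, alpha*delta, beta*gamma, beta*delta)

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])
print(np.kron(A, B))       # the 4x4 block matrix, as in Eq. (C.46)
```

Note that the result for two column vectors is again a column vector (a 4×1 array), in agreement with the np×mq rule stated above.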
&lt;br /&gt;
====Properties of Tensor Products====&lt;br /&gt;
&lt;br /&gt;
Listed here are properties of tensor products that are useful, with &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; of any type:&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)(C\otimes D) = AC \otimes BD\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)^T = A^T\otimes B^T\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)^* = A^*\otimes B^*\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)\otimes C = A\otimes(B\otimes C)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A+B) \otimes C = A\otimes C+B\otimes C\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;A\otimes(B+C) = A\otimes B + A\otimes C\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;\mbox{Tr}(A\otimes B) = \mbox{Tr}(A)\mbox{Tr}(B)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
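Several of the listed properties can be spot-checked numerically. A minimal sketch, assuming NumPy, with random complex 2×2 matrices standing in for A, B, C, D:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex 2x2 matrices standing in for A, B, C, D.
A, B, C, D = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
              for _ in range(4))

# Property 1: (A x B)(C x D) = AC x BD
p1 = np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# Property 2: (A x B)^T = A^T x B^T
p2 = np.allclose(np.kron(A, B).T, np.kron(A.T, B.T))

# Property 7: Tr(A x B) = Tr(A) Tr(B)
p7 = np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B))

print(p1, p2, p7)
```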
(See [[Bibliography#HornNJohnsonII|Horn and Johnson, Topics in Matrix Analysis]], Chapter 4.)&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_C_-_Vectors_and_Linear_Algebra&amp;diff=1789</id>
		<title>Appendix C - Vectors and Linear Algebra</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_C_-_Vectors_and_Linear_Algebra&amp;diff=1789"/>
		<updated>2012-01-05T08:55:54Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Index Notation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
This appendix introduces some aspects of linear algebra and complex&lt;br /&gt;
algebra that will be helpful for the course.  In addition, Dirac&lt;br /&gt;
notation is introduced and explained.&lt;br /&gt;
&lt;br /&gt;
===Vectors===&lt;br /&gt;
&lt;br /&gt;
Here we review some facts about real vectors before discussing the representations and complex analogues used in quantum mechanics.  &lt;br /&gt;
&lt;br /&gt;
====Real Vectors====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The simple definition of a vector --- an object that has magnitude and&lt;br /&gt;
direction --- is helpful to keep in mind even when dealing with complex&lt;br /&gt;
and/or abstract vectors, as we do here.  In three-dimensional space,&lt;br /&gt;
a vector is often written as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v} = v_x\hat{x} + v_y \hat{y} + v_z\hat{z},&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where the hat (&amp;lt;math&amp;gt;\hat{\cdot}\,\!&amp;lt;/math&amp;gt;) denotes a unit vector and the components&lt;br /&gt;
&amp;lt;math&amp;gt;v_i\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;i = x,y,z\,\!&amp;lt;/math&amp;gt; are just numbers.  The unit vectors are also&lt;br /&gt;
known as ''basis'' vectors. &amp;lt;!-- \index{basis vectors!real} --&amp;gt; &lt;br /&gt;
This is because any vector&lt;br /&gt;
in real three-dimensional space can be written in terms of these unit/basis vectors.  In&lt;br /&gt;
some sense they are the basic components of any vector.  Other basis&lt;br /&gt;
vectors could be used, however, such as in spherical and cylindrical coordinates.  When dealing with more abstract and/or complex vectors,&lt;br /&gt;
it is often helpful to ask what one would do for an ordinary&lt;br /&gt;
three-dimensional vector.  For example, properties of unit vectors,&lt;br /&gt;
dot products, etc. in three-dimensions are similar to the analogous&lt;br /&gt;
constructions in higher dimensions.  &lt;br /&gt;
&lt;br /&gt;
The ''inner product'',&amp;lt;!-- \index{inner product}--&amp;gt; or ''dot product'',&amp;lt;!--\index{dot product}--&amp;gt; for two real three-dimensional vectors,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v} = v_x\hat{x} + v_y \hat{y} + v_z\hat{z}, \;\; &lt;br /&gt;
\vec{w} = w_x\hat{x} + w_y \hat{y} + w_z\hat{z},&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
can be computed as follows:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}\cdot\vec{w} = v_xw_x + v_yw_y + v_zw_z.&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
For the inner product of &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt; with itself, we get the square of&lt;br /&gt;
the magnitude of &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt;, denoted &amp;lt;math&amp;gt;|\vec{v}|^2\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
|\vec{v}|^2 = \vec{v}\cdot\vec{v} = v_xv_x + v_yv_y +&lt;br /&gt;
v_zv_z=v_x^2+v_y^2+v_z^2. &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If we want a unit vector in the direction of &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt;, we can simply divide it&lt;br /&gt;
by its magnitude:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\hat{v} = \frac{\vec{v}}{|\vec{v}|}.  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now, of course, &amp;lt;math&amp;gt;\hat{v}\cdot\hat{v}= 1\,\!&amp;lt;/math&amp;gt;, which can easily be checked.  &lt;br /&gt;
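The dot product and normalization above can be illustrated with a short numerical sketch (NumPy and the sample components are illustrative assumptions, not part of the text):

```python
import numpy as np

# A real three-dimensional vector, given by its components in the
# {x-hat, y-hat, z-hat} basis.
v = np.array([3.0, 4.0, 12.0])

# |v|^2 = v . v = v_x^2 + v_y^2 + v_z^2, so |v| = sqrt(v . v).
magnitude = np.sqrt(v @ v)          # equals np.linalg.norm(v)

# Unit vector in the direction of v: divide by the magnitude.
v_hat = v / magnitude

# v-hat . v-hat = 1, as the text notes.
print(np.isclose(v_hat @ v_hat, 1.0))
```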
&lt;br /&gt;
There are several ways to represent a vector.  The ones we will use&lt;br /&gt;
most often are column and row vector notations.  So, for example, we&lt;br /&gt;
could write the vector above as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v} = \left(\begin{array}{c} v_x \\ v_y \\ v_z&lt;br /&gt;
  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
In this case, our unit vectors are represented by the following: &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\hat{x} = \left(\begin{array}{c} 1 \\ 0 \\ 0&lt;br /&gt;
  \end{array}\right), \;\;  &lt;br /&gt;
\hat{y} = \left(\begin{array}{c} 0 \\ 1 \\ 0&lt;br /&gt;
  \end{array}\right), \;\;&lt;br /&gt;
\hat{z} = \left(\begin{array}{c} 0 \\ 0 \\ 1&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We next turn to the subject of complex vectors and the relevant&lt;br /&gt;
notation. &lt;br /&gt;
We will see how to compute the inner product later, since some other&lt;br /&gt;
definitions are required.&lt;br /&gt;
&lt;br /&gt;
====Complex Vectors====&lt;br /&gt;
&lt;br /&gt;
For complex vectors in quantum mechanics, Dirac notation is used most often.  This notation uses a &amp;lt;math&amp;gt;\left\vert \cdot \right\rangle\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
called a ''ket'', for a vector.  So our vector &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt; would be&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert v \right\rangle  = \left(\begin{array}{c} v_x \\ v_y \\ v_z&lt;br /&gt;
  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For qubits, i.e. two-state quantum systems, complex vectors will often be used:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align} \left\vert \psi \right\rangle &amp;amp;= \left(\begin{array}{c} \alpha \\ \beta &lt;br /&gt;
  \end{array}\right) \\&lt;br /&gt;
           &amp;amp;=\alpha \left\vert 0\right\rangle + \beta\left\vert 1\right\rangle,\end{align}&amp;lt;/math&amp;gt;|C.1}} &lt;br /&gt;
where&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert 0\right\rangle = \left(\begin{array}{c} 1 \\ 0 &lt;br /&gt;
  \end{array}\right), \;\;\mbox{and} \;\;&lt;br /&gt;
\left\vert 1\right\rangle = \left(\begin{array}{c} 0 \\ 1 &lt;br /&gt;
  \end{array}\right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
are the basis vectors.  The two numbers &amp;lt;math&amp;gt;\alpha\,\!&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; are complex numbers, so the vector is said to&lt;br /&gt;
be a complex vector.&lt;br /&gt;
&lt;br /&gt;
====Inner Product====&lt;br /&gt;
&lt;br /&gt;
Now let us suppose we have another complex vector,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \phi \right\rangle  = \left(\begin{array}{c} \gamma \\ \delta &lt;br /&gt;
  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The ''inner product'' between two vectors is often written as &amp;lt;math&amp;gt;\left\langle \phi \vert \psi \right\rangle \;\! &amp;lt;/math&amp;gt;, which is the same as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align} (\left\vert \phi \right\rangle )^\dagger\left\vert \psi \right\rangle &lt;br /&gt;
&amp;amp;= \left(\begin{array}{c} \gamma \\ \delta \end{array}\right)^\dagger&lt;br /&gt;
\left(\begin{array}{c} \alpha \\ \beta   \end{array}\right) \\&lt;br /&gt;
           &amp;amp;= \left(\begin{array}{cc} \gamma^* &amp;amp; \delta^* \end{array}\right) \left(\begin{array}{c} \alpha \\ \beta   \end{array}\right) \\  &lt;br /&gt;
           &amp;amp;= \gamma^*\alpha + \delta^*\beta \end{align} \;\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
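The inner-product computation above translates directly into code: conjugate the components of the first vector, then take an ordinary dot product. A minimal sketch, assuming NumPy and illustrative values for the amplitudes:

```python
import numpy as np

alpha, beta = 0.6, 0.8j                      # components of |psi>
gamma, delta = 1 / np.sqrt(2), 1 / np.sqrt(2)  # components of |phi>

psi = np.array([alpha, beta])
phi = np.array([gamma, delta])

# <phi|psi> : conjugate-transpose of |phi> times |psi>.
inner = phi.conj() @ psi   # np.vdot(phi, psi) gives the same result

print(inner)               # gamma^* alpha + delta^* beta
```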
&lt;br /&gt;
====Outer Product====&lt;br /&gt;
&lt;br /&gt;
The ''outer product'' between these same two vectors is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt; &lt;br /&gt;
\begin{align} (\left\vert \phi \right\rangle )(\left\vert \psi \right\rangle)^\dagger &lt;br /&gt;
 &amp;amp;=  \left\vert \phi \right\rangle \left\langle \psi \right\vert \\&lt;br /&gt;
&amp;amp;= \left(\begin{array}{c} \gamma \\ \delta \end{array}\right)&lt;br /&gt;
\left(\begin{array}{c} \alpha \\ \beta   \end{array}\right)^\dagger \\&lt;br /&gt;
           &amp;amp;= \left(\begin{array}{c} \gamma \\ \delta \end{array}\right) \left(\begin{array}{cc} \alpha^* &amp;amp; \beta^*   \end{array}\right) \\  &lt;br /&gt;
           &amp;amp;=   \left(\begin{array}{cc} \gamma\alpha^* &amp;amp; \gamma\beta^* \\  \delta\alpha^* &amp;amp; \delta\beta^*  \end{array}\right) \end{align}\;\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
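The outer product likewise becomes a one-liner: a column vector times a conjugated row vector gives a matrix. A sketch assuming NumPy and sample amplitudes:

```python
import numpy as np

psi = np.array([0.6, 0.8j])                       # |psi>
phi = np.array([1 / np.sqrt(2), 1j / np.sqrt(2)])  # |phi>

# |phi><psi| : column vector times conjugated row vector.
outer = np.outer(phi, psi.conj())

print(outer)   # entries gamma alpha^*, gamma beta^*, delta alpha^*, delta beta^*
```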
&lt;br /&gt;
===Linear Algebra: Matrices===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are many aspects of linear algebra that are quite useful in&lt;br /&gt;
quantum mechanics.  We will briefly discuss several of these aspects here.&lt;br /&gt;
First, some definitions and properties are provided that will&lt;br /&gt;
be useful.  Some familiarity with matrices&lt;br /&gt;
will be assumed, although many basic definitions are also included.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Let us denote some &amp;lt;math&amp;gt;m\times n\,\!&amp;lt;/math&amp;gt; matrix by &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.  The set of all &amp;lt;math&amp;gt;m\times&lt;br /&gt;
n\,\!&amp;lt;/math&amp;gt; matrices with real entries is &amp;lt;math&amp;gt;M(m\times n,\mathbb{R})\,\!&amp;lt;/math&amp;gt;.  Such matrices&lt;br /&gt;
are said to be real since all of their entries are real.  Similarly, the&lt;br /&gt;
set of &amp;lt;math&amp;gt;m\times n\,\!&amp;lt;/math&amp;gt; complex matrices is &amp;lt;math&amp;gt;M(m\times n,\mathbb{C})\,\!&amp;lt;/math&amp;gt;.  For the&lt;br /&gt;
set of square &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; complex matrices, we simply write&lt;br /&gt;
&amp;lt;math&amp;gt;M(n,\mathbb{C})\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We will also refer to the set of matrix elements, &amp;lt;math&amp;gt;a_{ij}\,\!&amp;lt;/math&amp;gt;, where the&lt;br /&gt;
first index (&amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; in this case) labels the row and the second &amp;lt;math&amp;gt;(j)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
labels the column.  Thus the element &amp;lt;math&amp;gt;a_{23}\,\!&amp;lt;/math&amp;gt; is the element in the&lt;br /&gt;
second row and third column.  A comma is inserted if there is some&lt;br /&gt;
ambiguity.  For example, in a large matrix the element in the&lt;br /&gt;
2nd row and 12th&lt;br /&gt;
column is written as &amp;lt;math&amp;gt;a_{2,12}\,\!&amp;lt;/math&amp;gt; to distinguish it from &amp;lt;math&amp;gt;a_{21,2}\,\!&amp;lt;/math&amp;gt;, the element in the&lt;br /&gt;
21st row and 2nd column.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Complex Conjugate====&lt;br /&gt;
&lt;br /&gt;
The ''complex conjugate of a matrix'' &amp;lt;!-- \index{complex conjugate!of a matrix}--&amp;gt;&lt;br /&gt;
is the matrix with each element replaced by its complex conjugate.  In&lt;br /&gt;
other words, to take the complex conjugate of a matrix, one takes the&lt;br /&gt;
complex conjugate of each entry in the matrix.  We denote the complex&lt;br /&gt;
conjugate with a star, like this: &amp;lt;math&amp;gt;A^*\,\!&amp;lt;/math&amp;gt;.  For example,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A^* &amp;amp;=&amp;amp; \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11} &amp;amp; a_{12} &amp;amp; a_{13} \\&lt;br /&gt;
        a_{21} &amp;amp; a_{22} &amp;amp; a_{23} \\&lt;br /&gt;
        a_{31} &amp;amp; a_{32} &amp;amp; a_{33} \end{array}\right)^*  \\&lt;br /&gt;
    &amp;amp;=&amp;amp; \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11}^* &amp;amp; a_{12}^* &amp;amp; a_{13}^* \\&lt;br /&gt;
        a_{21}^* &amp;amp; a_{22}^* &amp;amp; a_{23}^* \\&lt;br /&gt;
        a_{31}^* &amp;amp; a_{32}^* &amp;amp; a_{33}^* \end{array}\right). &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.2}}&lt;br /&gt;
(Notice that the notation for a matrix is a capital letter, whereas&lt;br /&gt;
the entries are represented by lower case&lt;br /&gt;
letters.)&lt;br /&gt;
&lt;br /&gt;
====Transpose====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''transpose'' &amp;lt;!-- \index{transpose} --&amp;gt; of a matrix is the same set of&lt;br /&gt;
elements, but now the first row becomes the first column, the second row&lt;br /&gt;
becomes the second column, and so on.  Thus the rows and columns are&lt;br /&gt;
interchanged.  For example, for a square &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrix, the&lt;br /&gt;
transpose is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A^T &amp;amp;=&amp;amp; \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11} &amp;amp; a_{12} &amp;amp; a_{13} \\&lt;br /&gt;
        a_{21} &amp;amp; a_{22} &amp;amp; a_{23} \\&lt;br /&gt;
        a_{31} &amp;amp; a_{32} &amp;amp; a_{33} \end{array}\right)^T \\&lt;br /&gt;
    &amp;amp;=&amp;amp; \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11} &amp;amp; a_{21} &amp;amp; a_{31} \\&lt;br /&gt;
        a_{12} &amp;amp; a_{22} &amp;amp; a_{32} \\&lt;br /&gt;
        a_{13} &amp;amp; a_{23} &amp;amp; a_{33} \end{array}\right). &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.3}}&lt;br /&gt;
&lt;br /&gt;
====Hermitian Conjugate====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The complex conjugate and transpose of a matrix is called the ''Hermitian conjugate'', or simply the ''dagger'', of a matrix.  It is called the dagger because of the symbol used to denote it&lt;br /&gt;
(&amp;lt;math&amp;gt;\dagger\,\!&amp;lt;/math&amp;gt;):&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
(A^T)^* = (A^*)^T \equiv A^\dagger.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.4}}&lt;br /&gt;
For our &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; example, &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
A^\dagger = \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11}^* &amp;amp; a_{21}^* &amp;amp; a_{31}^* \\&lt;br /&gt;
        a_{12}^* &amp;amp; a_{22}^* &amp;amp; a_{32}^* \\&lt;br /&gt;
        a_{13}^* &amp;amp; a_{23}^* &amp;amp; a_{33}^* \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If a matrix is its own Hermitian conjugate, i.e. &amp;lt;math&amp;gt;A^\dagger = A\,\!&amp;lt;/math&amp;gt;, then&lt;br /&gt;
we call it a ''Hermitian matrix''.  &amp;lt;!-- \index{Hermitian matrix}--&amp;gt;&lt;br /&gt;
(Clearly this is only possible for square matrices.) Hermitian&lt;br /&gt;
matrices are very important in quantum mechanics since their&lt;br /&gt;
eigenvalues are real.  (See Sec.([[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|Eigenvalues and Eigenvectors]]).)&lt;br /&gt;
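The Hermitian conjugate is easy to compute numerically, and Eq. (C.4) says the transpose and the complex conjugation may be taken in either order. A sketch assuming NumPy and a sample matrix:

```python
import numpy as np

A = np.array([[1 + 2j, 3],
              [4j,     5 - 1j]])

# Hermitian conjugate (dagger): transpose, then complex-conjugate each
# entry; the order does not matter, as Eq. (C.4) states.
A_dagger = A.conj().T

print(np.array_equal(A_dagger, A.T.conj()))   # (A^T)^* = (A^*)^T
```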
&lt;br /&gt;
====Index Notation====&lt;br /&gt;
&lt;br /&gt;
Very often we write the product of two matrices &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; simply as&lt;br /&gt;
&amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;C=AB\,\!&amp;lt;/math&amp;gt;.  However, it is also quite useful to write this&lt;br /&gt;
in component form.  In this case, if these are &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrices, the component form will be &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
c_{ik} = \sum_{j=1}^n a_{ij}b_{jk}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This says that the element in the &amp;lt;math&amp;gt;i^{\mbox{th}}\,\!&amp;lt;/math&amp;gt; row and&lt;br /&gt;
&amp;lt;math&amp;gt;k^{\mbox{th}}\,\!&amp;lt;/math&amp;gt; column of the matrix &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the sum &amp;lt;math&amp;gt;\sum_{j=1}^n&lt;br /&gt;
a_{ij}b_{jk}\,\!&amp;lt;/math&amp;gt;.  The transpose of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; has elements&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
c_{ki} = \sum_{j=1}^n a_{kj}b_{ji}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now, writing the entries of &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; in terms of their transposes, using &amp;lt;math&amp;gt;a_{kj} = a^T_{jk}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b_{ji} = b^T_{ij}\,\!&amp;lt;/math&amp;gt;, this reads&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
c_{ki} = \sum_{j=1}^n a^T_{jk}\, b^T_{ij} = \sum_{j=1}^n b^T_{ij}\, a^T_{jk}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This gives us a way of seeing the general rule that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
C^T = B^TA^T.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
It follows that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
C^\dagger = B^\dagger A^\dagger.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
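The reversal of order under both the transpose and the dagger can be checked numerically on random complex matrices (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

C = A @ B

# The order of the factors reverses under both operations.
ok_T = np.allclose(C.T, B.T @ A.T)                    # C^T = B^T A^T
ok_dag = np.allclose(C.conj().T, B.conj().T @ A.conj().T)  # C† = B† A†
print(ok_T, ok_dag)
```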
&lt;br /&gt;
====The Trace====&lt;br /&gt;
&lt;br /&gt;
The ''trace'' &amp;lt;!-- \index{trace}--&amp;gt; of a matrix is the sum of the diagonal&lt;br /&gt;
elements and is denoted &amp;lt;math&amp;gt;\mbox{Tr}\,\!&amp;lt;/math&amp;gt;.  So for example, the trace of an&lt;br /&gt;
&amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mbox{Tr}(A) = \sum_{i=1}^n a_{ii}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some useful properties of the trace are the following:&lt;br /&gt;
#&amp;lt;math&amp;gt;\mbox{Tr}(AB) = \mbox{Tr}(BA)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;\mbox{Tr}(A + B) = \mbox{Tr}(A) + \mbox{Tr}(B)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Using the first of these results,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mbox{Tr}(UAU^{-1}) = \mbox{Tr}(U^{-1}UA) = \mbox{Tr}(A).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This relation is used so often that we state it here explicitly.&lt;br /&gt;
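Here is a minimal numerical check of this invariance (NumPy and the sample matrices are illustrative assumptions):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

# A sample invertible U; any invertible matrix will do.
U = np.array([[1.0, 1.0], [0.0, 1.0]])
U_inv = np.linalg.inv(U)

# Tr(U A U^{-1}) = Tr(A): the trace is invariant under similarity
# transformations, by the cyclic property Tr(AB) = Tr(BA).
print(np.isclose(np.trace(U @ A @ U_inv), np.trace(A)))
```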
&lt;br /&gt;
====The Determinant====&lt;br /&gt;
&lt;br /&gt;
For a square matrix, the determinant is quite a useful thing.  For&lt;br /&gt;
example, an &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix is invertible if and only if its&lt;br /&gt;
determinant is not zero.  So let us define the determinant and give&lt;br /&gt;
some properties and examples.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''determinant''&amp;lt;!--\index{determinant}--&amp;gt; of a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrix, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
N = \left(\begin{array}{cc}&lt;br /&gt;
                 a &amp;amp; b \\&lt;br /&gt;
                 c &amp;amp; d \end{array}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.5}}&lt;br /&gt;
is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(N) = ad-bc.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.6}}&lt;br /&gt;
Higher-order determinants can be written in terms of smaller ones in&lt;br /&gt;
the standard way.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''determinant''&amp;lt;!-- \index{determinant}--&amp;gt; of a matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can be&lt;br /&gt;
also be written in terms of its components as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(A) = \sum_{i,j,k,l,...} \epsilon_{ijkl...}&lt;br /&gt;
a_{1i}a_{2j}a_{3k}a_{4l} ...,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.7}}&lt;br /&gt;
where the symbol &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{ijkl...} = \begin{cases}&lt;br /&gt;
                       +1, \; \mbox{if } \; ijkl... = 1234... (\mbox{in order, or any even number of permutations}),\\&lt;br /&gt;
                       -1, \; \mbox{if } \; ijkl... = 2134... (\mbox{or any odd number of permutations}),\\&lt;br /&gt;
                       \;\;\; 0, \; \mbox{otherwise}, \; (\mbox{meaning any index is repeated}).&lt;br /&gt;
                      \end{cases}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.8}}&lt;br /&gt;
&lt;br /&gt;
Let us consider the example of the &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; given&lt;br /&gt;
above.  The determinant can be calculated by&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(A) = \sum_{i,j,k} \epsilon_{ijk}&lt;br /&gt;
a_{1i}a_{2j}a_{3k},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where, explicitly, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{ijk} = \begin{cases}&lt;br /&gt;
                       +1, \;\mbox{if }\; ijk= 123,231,\; \mbox{or}\; 312, (\mbox{These are even permutations of }123),\\&lt;br /&gt;
                       -1, \;\mbox{if }\; ijk = 213,132,\;\mbox{or}\;321\; (\mbox{These are odd permutations of }123),\\&lt;br /&gt;
                    \;\;\;  0, \; \mbox{otherwise}, \; (\mbox{meaning any index is repeated}),&lt;br /&gt;
                 \end{cases}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.9}}&lt;br /&gt;
so that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\det(A) &amp;amp;=&amp;amp; \epsilon_{123}a_{11}a_{22}a_{33} &lt;br /&gt;
         +\epsilon_{132}a_{11}a_{23}a_{32}&lt;br /&gt;
         +\epsilon_{231}a_{12}a_{23}a_{31}  \\&lt;br /&gt;
       &amp;amp;&amp;amp;+\epsilon_{213}a_{12}a_{21}a_{33}&lt;br /&gt;
         +\epsilon_{312}a_{13}a_{21}a_{32}&lt;br /&gt;
         +\epsilon_{321}a_{13}a_{22}a_{31}.&lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.10}}&lt;br /&gt;
Now given the values of &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; in [[#eqC.9|Eq. C.9]],&lt;br /&gt;
this is&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(A) = a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} + a_{12}a_{23}a_{31} &lt;br /&gt;
         - a_{12}a_{21}a_{33} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
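The Levi-Civita formula for the 3×3 determinant can be implemented directly and compared against a standard determinant routine. A sketch assuming NumPy; the permutation-sign helper plays the role of ε_{ijk}, with indices running 0-2 instead of 1-3:

```python
import numpy as np
from itertools import permutations

def det_levi_civita(A):
    """3x3 determinant via det(A) = sum_{ijk} eps_{ijk} a_{1i} a_{2j} a_{3k}."""
    def sign(p):
        # +1 for an even permutation, -1 for an odd one (count inversions).
        s = 1
        for x in range(3):
            for y in range(x + 1, 3):
                if p[x] > p[y]:
                    s = -s
        return s
    return sum(sign(p) * A[0, p[0]] * A[1, p[1]] * A[2, p[2]]
               for p in permutations(range(3)))

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 1.0, 2.0]])

print(np.isclose(det_levi_civita(A), np.linalg.det(A)))
```

Repeated indices never appear in the sum because `permutations` only generates index triples with all entries distinct, which is exactly the ε = 0 case.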
&lt;br /&gt;
The determinant has several properties that are useful to know.  A few are listed here:  &lt;br /&gt;
#The determinant of the transpose of a matrix is the same as the determinant of the matrix itself: &amp;lt;center&amp;gt;&amp;lt;math&amp;gt; \det(A) = \det(A^T).\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
#The determinant of a product is the product of determinants:    &amp;lt;center&amp;gt;&amp;lt;math&amp;gt; \det(AB) = \det(A)\det(B).\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
From this last property, another specific property can be derived.&lt;br /&gt;
If we take the determinant of the product of a matrix and its&lt;br /&gt;
inverse, we find&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(U U^{-1}) = \det(U)\det(U^{-1}) = \det(\mathbb{I}) = 1,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
since the determinant of the identity is one.  This implies that&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(U^{-1}) = \frac{1}{\det(U)}.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====The Inverse of a Matrix====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The inverse &amp;lt;!-- \index{inverse}--&amp;gt; of a square matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is another matrix,&lt;br /&gt;
denoted &amp;lt;math&amp;gt;A^{-1}\,\!&amp;lt;/math&amp;gt;, such that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
AA^{-1} = A^{-1}A = \mathbb{I},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt; is the identity matrix consisting of zeroes everywhere&lt;br /&gt;
except the diagonal, which has ones.  For example, the &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}_3 = \left(\begin{array}{ccc} 1 &amp;amp; 0 &amp;amp; 0 \\ 0 &amp;amp; 1 &amp;amp; 0 \\ 0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is important to note that ''a matrix is invertible if and only if its determinant is nonzero.''  Thus one only needs to calculate the&lt;br /&gt;
determinant to see if a matrix has an inverse or not.&lt;br /&gt;
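A quick numerical illustration of this criterion (NumPy and the sample matrices are illustrative assumptions):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # det = -2, so A is invertible
B = np.array([[1.0, 2.0], [2.0, 4.0]])   # det = 0, so B is singular

print(np.isclose(np.linalg.det(A), -2.0))
print(np.allclose(A @ np.linalg.inv(A), np.eye(2)))   # A A^{-1} = I
print(np.isclose(np.linalg.det(B), 0.0))              # B has no inverse
```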
&lt;br /&gt;
====Hermitian Matrices====&lt;br /&gt;
&lt;br /&gt;
Hermitian matrices are important for a variety of reasons, primarily because their eigenvalues are real.  Thus Hermitian matrices are used to represent density operators and density matrices, as well as Hamiltonians.  The density operator is a positive semi-definite Hermitian matrix (it has no negative eigenvalues) whose trace is equal to one.  In any case, it is often desirable to represent &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrices using a real linear combination of a complete set of &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrices.  A set of &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrices is complete if any Hermitian matrix can be represented in terms of the set.  Let &amp;lt;math&amp;gt;\{\lambda_i\}\,\!&amp;lt;/math&amp;gt; be a complete set.  Then any Hermitian matrix can be represented by &amp;lt;math&amp;gt;\sum_i a_i \lambda_i\,\!&amp;lt;/math&amp;gt;.  The set can always be taken to consist of traceless Hermitian matrices together with the identity matrix.  This is convenient for the density matrix (its trace is one) because the identity part of a unit-trace &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrix is &amp;lt;math&amp;gt;(1/N)\mathbb{I}\,\!&amp;lt;/math&amp;gt; if we take all the other matrices in the set to be traceless.  For the Hamiltonian, the set consists of a traceless part and an identity part, where the identity part just gives an overall phase which can often be neglected.  &lt;br /&gt;
&lt;br /&gt;
One example of such a set which is extremely useful is the set of Pauli matrices.  These are discussed in detail in [[Chapter 2 - Qubits and Collections of Qubits|Chapter 2]] and in particular in [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|Section 2.4]].&lt;br /&gt;
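As a sketch of such an expansion (assuming NumPy; the matrix `H` below is an arbitrary Hermitian example), a 2×2 Hermitian matrix can be expanded in the identity and the three Pauli matrices, with real coefficients given by a_i = Tr(λ_i H)/2 since Tr(λ_i λ_j) = 2δ_ij for this set:

```python
import numpy as np

I  = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H = np.array([[2.0, 1 - 2j],
              [1 + 2j, -1.0]])      # an arbitrary Hermitian matrix

basis = [I, sx, sy, sz]
# Coefficients a_i = Tr(lambda_i H)/2; they are guaranteed real for Hermitian H.
coeffs = [np.trace(m @ H).real / 2 for m in basis]

# The real linear combination reproduces H exactly.
H_rebuilt = sum(a * m for a, m in zip(coeffs, basis))
assert np.allclose(H, H_rebuilt)
```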
&lt;br /&gt;
====Unitary Matrices====&lt;br /&gt;
&lt;br /&gt;
A ''unitary matrix'' &amp;lt;!-- \index{unitary matrix} --&amp;gt; &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; is one whose&lt;br /&gt;
inverse is also its Hermitian conjugate, &amp;lt;math&amp;gt;U^\dagger = U^{-1}\,\!&amp;lt;/math&amp;gt;, so that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
U^\dagger U = UU^\dagger = \mathbb{I}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If the unitary matrix also has determinant one, it is said to be ''a special unitary matrix''.&amp;lt;!-- \index{special unitary matrix}--&amp;gt;  The set of&lt;br /&gt;
&amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; unitary matrices is denoted&lt;br /&gt;
&amp;lt;math&amp;gt;U(n)\,\!&amp;lt;/math&amp;gt; and the set of special unitary matrices is denoted &amp;lt;math&amp;gt;SU(n)\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
Unitary matrices are particularly important in quantum mechanics&lt;br /&gt;
because they describe the evolution of quantum states.&lt;br /&gt;
This property follows from the fact that the rows and columns of unitary matrices (viewed as vectors) are orthonormal. (This is made clear in an example below.)  This means that when&lt;br /&gt;
they act on a basis vector of the form&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \left\vert j\right\rangle = &lt;br /&gt;
 \left(\begin{array}{c} 0 \\ 0 \\ \vdots \\ 1 \\ \vdots \\ 0 &lt;br /&gt;
  \end{array}\right), &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.11}}&lt;br /&gt;
with a single 1 in, say, the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th spot and zeroes everywhere else, the result is a normalized complex vector.  Acting on a set of&lt;br /&gt;
orthonormal vectors of the form given in Eq.[[#eqC.11|(C.11)]]&lt;br /&gt;
will produce another orthonormal set.  &lt;br /&gt;
&lt;br /&gt;
Let us consider the example of a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; unitary matrix, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U = \left(\begin{array}{cc} &lt;br /&gt;
              a &amp;amp; b \\ &lt;br /&gt;
              c &amp;amp; d &lt;br /&gt;
           \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.12}}&lt;br /&gt;
The inverse of this matrix is the Hermitian conjugate, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U ^{-1} = U^\dagger = \left(\begin{array}{cc} &lt;br /&gt;
                         a^* &amp;amp; c^* \\ &lt;br /&gt;
                         b^* &amp;amp; d^* &lt;br /&gt;
                       \end{array}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.13}}&lt;br /&gt;
provided that the matrix &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; satisfies the constraints&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
|a|^2 + |b|^2 = 1, \; &amp;amp; \; ac^*+bd^* =0  \\&lt;br /&gt;
ca^*+db^*=0,  \;      &amp;amp;  \; |c|^2 + |d|^2 =1,&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|C.14}}&lt;br /&gt;
and&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
|a|^2 + |c|^2 = 1, \; &amp;amp; \; ba^*+dc^* =0  \\&lt;br /&gt;
b^*a+d^*c=0,  \;      &amp;amp;  \; |b|^2 + |d|^2 =1.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|C.15}}&lt;br /&gt;
Looking at each row as a vector, the constraints in&lt;br /&gt;
Eq.[[#eqC.14|(C.14)]] are the orthonormality conditions for the&lt;br /&gt;
vectors forming the rows.  Similarly, the constraints in&lt;br /&gt;
Eq.[[#eqC.15|(C.15)]] are the orthonormality conditions for the&lt;br /&gt;
vectors forming the columns.&lt;br /&gt;
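These orthonormality conditions are easy to verify numerically (a sketch using NumPy; the Hadamard matrix is used as an example unitary):

```python
import numpy as np

# A 2x2 unitary matrix (the Hadamard matrix, as an example).
U = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

# U^dagger U = U U^dagger = I ...
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.allclose(U @ U.conj().T, np.eye(2))

# ... which is equivalent to the rows (and columns) being orthonormal.
for i in range(2):
    for j in range(2):
        row_ip = np.vdot(U[i, :], U[j, :])   # inner product of rows i, j
        col_ip = np.vdot(U[:, i], U[:, j])   # inner product of columns i, j
        assert np.isclose(row_ip, 1.0 if i == j else 0.0)
        assert np.isclose(col_ip, 1.0 if i == j else 0.0)
```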
&lt;br /&gt;
===More Dirac Notation===&lt;br /&gt;
&lt;br /&gt;
Now that we have a definition for the Hermitian conjugate, we consider the&lt;br /&gt;
case of an &amp;lt;math&amp;gt;n\times 1\,\!&amp;lt;/math&amp;gt; matrix, i.e., a column vector.  In Dirac notation, this is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi\right\rangle = \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The Hermitian conjugate comes up so often that we use the following&lt;br /&gt;
notation for vectors:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle \psi\right\vert = (\left\vert\psi\right\rangle)^\dagger = \left(\begin{array}{c} \alpha \\&lt;br /&gt;
    \beta \end{array}\right)^\dagger &lt;br /&gt;
 = \left( \alpha^*, \; \beta^* \right).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a row vector and in Dirac notation is denoted by the symbol &amp;lt;math&amp;gt;\left\langle\cdot \right\vert\!&amp;lt;/math&amp;gt;, which is called a ''bra''.  Let us consider a second complex vector, &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\phi\right\rangle = \left(\begin{array}{c} \gamma \\ \delta&lt;br /&gt;
  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The ''inner product'' &amp;lt;!-- \index{inner product}--&amp;gt; between &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; and &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert\phi\right\rangle\,\!&amp;lt;/math&amp;gt; is computed as follows:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \left\langle\phi\mid\psi\right\rangle &amp;amp; \equiv (\left\vert\phi\right\rangle)^\dagger\left\vert\psi \right\rangle   \\&lt;br /&gt;
                  &amp;amp;= (\gamma^*,\delta^*) \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right)   \\&lt;br /&gt;
                  &amp;amp;= \gamma^*\alpha + \delta^*\beta.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.16}}&lt;br /&gt;
If these two vectors are ''orthogonal'', &amp;lt;!-- \index{orthogonal!vectors} --&amp;gt;&lt;br /&gt;
then their inner product is zero, or &amp;lt;math&amp;gt;\left\langle\phi\mid\psi\right\rangle =0\,\!&amp;lt;/math&amp;gt;.  (The &amp;lt;math&amp;gt; \left\langle\phi\mid\psi\right\rangle \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
is called a ''bracket'', which is the product of the ''bra'' and the ''ket''.)  The inner product of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; with itself is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\psi\mid\psi\right\rangle = |\alpha|^2 + |\beta|^2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This vector is considered normalized when &amp;lt;math&amp;gt;\left\langle\psi\mid\psi\right\rangle = 1\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
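Numerically, the bracket is just the conjugated dot product (a sketch using NumPy; the amplitudes below are made-up examples):

```python
import numpy as np

# |psi> and |phi> as column vectors with example amplitudes.
psi = np.array([3/5, 4j/5])                    # alpha, beta
phi = np.array([1/np.sqrt(2), 1/np.sqrt(2)])   # gamma, delta

# <phi|psi> = (gamma*, delta*) . (alpha, beta); vdot conjugates its
# first argument, exactly matching Eq. (C.16).
bracket = np.vdot(phi, psi)

# <psi|psi> = |alpha|^2 + |beta|^2 = 1 for a normalized state.
assert np.isclose(np.vdot(psi, psi).real, 1.0)
assert np.isclose(bracket, (3/5 + 4j/5) / np.sqrt(2))
```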
&lt;br /&gt;
More generally, we will consider vectors in &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; dimensions.  In this&lt;br /&gt;
case we write the vector in terms of a set of basis vectors,&lt;br /&gt;
&amp;lt;math&amp;gt;\{\left\vert i\right\rangle\}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;i = 0,1,2,...N-1\,\!&amp;lt;/math&amp;gt;.  This is an ordered set of&lt;br /&gt;
vectors which are labeled simply by integers.  If the set is orthogonal,&lt;br /&gt;
then &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i\mid j\right\rangle = 0, \;\; \mbox{for all }i\neq j.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If they are normalized, then &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i \mid i \right\rangle = 1, \;\;\mbox{for all } i.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If both of these are true, i.e. the entire set is orthonormal, we can&lt;br /&gt;
write&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i\mid j\right\rangle = \delta_{ij}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where the symbol &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; is called the Kronecker delta &amp;lt;!-- \index{Kronecker delta} --&amp;gt; and is defined by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\delta_{ij} = \begin{cases}&lt;br /&gt;
               1, &amp;amp; \mbox{if } i=j, \\&lt;br /&gt;
               0, &amp;amp; \mbox{if } i\neq j.&lt;br /&gt;
              \end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.17}}&lt;br /&gt;
Now consider &amp;lt;math&amp;gt;(N+1)\,\!&amp;lt;/math&amp;gt;-dimensional vectors by letting two such vectors&lt;br /&gt;
be expressed in the same basis as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \Psi\right\rangle = \sum_{i=0}^{N} \alpha_i\left\vert i\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
and&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\Phi\right\rangle = \sum_{j=0}^{N} \beta_j\left\vert j\right\rangle.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Then the inner product &amp;lt;!--\index{inner product}--&amp;gt; is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\langle\Psi\mid\Phi\right\rangle &amp;amp;= \left(\sum_{i=0}^{N}&lt;br /&gt;
             \alpha_i\left\vert i\right\rangle\right)^\dagger\left(\sum_{j=0}^{N} \beta_j\left\vert j\right\rangle\right)  \\&lt;br /&gt;
                 &amp;amp;= \sum_{ij} \alpha_i^*\beta_j\left\langle i\mid j\right\rangle  \\&lt;br /&gt;
                 &amp;amp;= \sum_{ij} \alpha_i^*\beta_j\delta_{ij}  \\&lt;br /&gt;
                 &amp;amp;= \sum_i\alpha^*_i\beta_i,&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.18}}&lt;br /&gt;
where the fact that the Kronecker delta is zero unless&lt;br /&gt;
&amp;lt;math&amp;gt;i=j\,\!&amp;lt;/math&amp;gt; is used to obtain the last equality.  Taking the inner product of a vector&lt;br /&gt;
with itself gives&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\Psi\mid\Psi\right\rangle = \sum_i\alpha^*_i\alpha_i = \sum_i|\alpha_i|^2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This immediately gives us a very important property of the inner&lt;br /&gt;
product.  It tells us that, in general,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\Phi\mid\Phi\right\rangle \geq 0, \;\; \mbox{and} \;\; \left\langle\Phi\mid \Phi\right\rangle = 0&lt;br /&gt;
\Leftrightarrow \left\vert\Phi\right\rangle = 0. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
(The symbol &amp;lt;math&amp;gt;\Leftrightarrow\,\!&amp;lt;/math&amp;gt; means &amp;lt;nowiki&amp;gt;&amp;quot;if and only if,&amp;quot;&amp;lt;/nowiki&amp;gt; sometimes written as &amp;lt;nowiki&amp;gt;&amp;quot;iff.&amp;quot;&amp;lt;/nowiki&amp;gt;)  &lt;br /&gt;
&lt;br /&gt;
We could also expand a vector in a different basis.  Let us suppose&lt;br /&gt;
that the set &amp;lt;math&amp;gt;\{\left\vert e_k \right\rangle\}\,\!&amp;lt;/math&amp;gt; is an orthonormal basis &amp;lt;math&amp;gt;(\left\langle e_k \mid e_l\right\rangle =&lt;br /&gt;
\delta_{kl})\,\!&amp;lt;/math&amp;gt; that is different from the one considered earlier.  We&lt;br /&gt;
could expand our vector &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt; in terms of our new basis by&lt;br /&gt;
expanding our new basis in terms of our old basis.  Let us first&lt;br /&gt;
expand the &amp;lt;math&amp;gt;\left\vert e_k\right\rangle\,\!&amp;lt;/math&amp;gt; in terms of the &amp;lt;math&amp;gt;\left\vert j\right\rangle\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert e_k\right\rangle= \sum_j \left\vert j\right\rangle \left\langle j\mid e_k\right\rangle,&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.19}}&lt;br /&gt;
so that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert \Psi\right\rangle &amp;amp;= \sum_j \alpha_j\left\vert j\right\rangle  \\&lt;br /&gt;
           &amp;amp;= \sum_{j}\sum_k\alpha_j\left\vert e_k \right\rangle \left\langle e_k \mid j\right\rangle  \\ &lt;br /&gt;
           &amp;amp;= \sum_k \alpha_k^\prime \left\vert e_k\right\rangle, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.20}}&lt;br /&gt;
where &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\alpha_k^\prime = \sum_j \alpha_j \left\langle e_k \mid j\right\rangle. &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.21}}&lt;br /&gt;
Notice that the insertion of &amp;lt;math&amp;gt;\sum_k\left\vert e_k\right\rangle\left\langle e_k\right\vert\,\!&amp;lt;/math&amp;gt; didn't do anything to our original vector; it is the same vector, just in a&lt;br /&gt;
different basis.  Therefore, this is effectively the identity operator,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I} = \sum_k\left\vert e_k \right\rangle\left\langle e_k\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This is an important and quite useful relation.  &lt;br /&gt;
To interpret Eq.[[#eqC.19|(C.19)]], we can draw a close&lt;br /&gt;
analogy with three-dimensional real vectors.  The inner product&lt;br /&gt;
&amp;lt;math&amp;gt;\left\langle j \mid e_k \right\rangle\,\!&amp;lt;/math&amp;gt; can be interpreted as the projection of one vector onto&lt;br /&gt;
another.  It provides the part of &amp;lt;math&amp;gt;\left\vert e_k \right\rangle\,\!&amp;lt;/math&amp;gt; along &amp;lt;math&amp;gt;\left\vert j \right\rangle\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Transformations===&lt;br /&gt;
&lt;br /&gt;
Suppose we have two different orthonormal bases, &amp;lt;math&amp;gt;\{e_k\}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\{j\}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
The numbers &amp;lt;math&amp;gt;\left\langle e_k\mid j\right\rangle\,\!&amp;lt;/math&amp;gt; for all the different &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; are&lt;br /&gt;
often referred to as matrix elements since the set forms a matrix, with&lt;br /&gt;
&amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; labelling the rows and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; labelling the columns.  Thus we&lt;br /&gt;
can represent the transformation from one basis to another with a matrix&lt;br /&gt;
transformation.  Let &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; be the matrix with elements &amp;lt;math&amp;gt;m_{kj} =&lt;br /&gt;
\left\langle e_k\mid j\right\rangle\,\!&amp;lt;/math&amp;gt;.  The transformation from one basis to another,&lt;br /&gt;
written in terms of the coefficients of &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; A^\prime = MA, &amp;lt;/math&amp;gt;|C.22}}&lt;br /&gt;
where &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
A^\prime = \left(\begin{array}{c} \alpha_1^\prime \\ \alpha_2^\prime \\ \vdots \\&lt;br /&gt;
    \alpha_n^\prime \end{array}\right), \;\; &lt;br /&gt;
\mbox{ and } \;\;&lt;br /&gt;
A = \left(\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\&lt;br /&gt;
    \alpha_n\end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This sort of transformation is a change of basis.  Often, when one vector is transformed into another, the transformation can be viewed as a transformation of the components of the vector and is likewise represented by a matrix.  Thus transformations can either be&lt;br /&gt;
represented by a matrix equation, like Eq.[[#eqC.22|(C.22)]], or in terms of components, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\alpha_k^\prime = \sum_j \alpha_j \left\langle e_k \mid j \right\rangle = \sum_j m_{kj}\alpha_j. &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.23}}&lt;br /&gt;
In the case that we consider a matrix transformation of basis elements, we call it a passive transformation.  (The transformation does nothing to the object, but only changes the basis in which the object is described.)  An active transformation is one where the object itself is transformed.  Often these two transformations, active and passive, are very simply related.  However, the distinction can be very important.  &lt;br /&gt;
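The change-of-basis formula α'_k = Σ_j m_kj α_j can be sketched as follows (assuming NumPy; the new basis and the coefficients are example values):

```python
import numpy as np

# Old basis: the standard basis |0>, |1>.  New basis |e_k>: an example choice.
e = [np.array([1,  1], dtype=complex) / np.sqrt(2),
     np.array([1, -1], dtype=complex) / np.sqrt(2)]

# Matrix elements m_kj = <e_k | j>; vdot conjugates its first argument.
M = np.array([[np.vdot(e[k], np.eye(2)[:, j]) for j in range(2)]
              for k in range(2)])

A = np.array([0.6, 0.8])   # coefficients alpha_j in the old basis
A_prime = M @ A            # coefficients alpha'_k in the new basis (Eq. C.22)

# Same vector either way: sum_k alpha'_k |e_k> reproduces the original.
rebuilt = sum(A_prime[k] * e[k] for k in range(2))
assert np.allclose(rebuilt, A)
```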
&lt;br /&gt;
For a general transformation matrix &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; acting on a vector,&lt;br /&gt;
the matrix elements in a particular basis &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
t_{ij} = \left\langle i\right\vert T \left\vert j\right\rangle, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
just as elements of a vector can be found using&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i\mid \Psi \right\rangle = \alpha_i.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Transformations of a Qubit====&lt;br /&gt;
&lt;br /&gt;
It is worth belaboring the point somewhat and presenting several ways in which to parametrize the set of transformations of a qubit.  A qubit state is represented by a complex two-dimensional vector that has been normalized to one:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \psi\right\rangle = \alpha_0 \left\vert 0 \right\rangle + \alpha_1 \left\vert 1\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1 \end{array}\right), \;\;\;\; |\alpha_0|^2 + |\alpha_1|^2 = 1.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The most general matrix transformation that will take this to any other state of the same form (complex, 2-d vector with unit norm) is a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; unitary matrix.  In [[Chapter 2 - Qubits and Collections of Qubits|Chapter 2]], several specific examples of qubit transformations were given; in [[Chapter 3 - Physics of Quantum Information|Chapter 3]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|Section 3.4]] it was stated that an element of SU(2) can be written as (see [[Chapter 3 - Physics of Quantum Information#Exponentian of a Matrix|Section 3.2.1, Exponentiation of a Matrix]])&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
U(\theta) &amp;amp;= \exp(-i\vec{n}\cdot\vec{\sigma} \theta/2) \\&lt;br /&gt;
          &amp;amp;= (\mathbb{I}\cos(\theta/2) -i\vec{n}\cdot\vec{\sigma} \sin(\theta/2))&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.24}}&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{n}\,\!&amp;lt;/math&amp;gt; is a unit vector, &amp;lt;math&amp;gt;|\vec{n}|=1\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\vec{n}\cdot\vec{\sigma} =&lt;br /&gt;
n_1\sigma_1+n_2\sigma_2+n_3\sigma_3\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
Explicitly, this is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \exp(-i\vec{n}\cdot\vec{\sigma} \theta/2) &amp;amp;= \left(\begin{array}{cc}&lt;br /&gt;
                                  1 &amp;amp; 0 \\ &lt;br /&gt;
                                  0 &amp;amp; 1 \end{array}\right)\cos(\theta/2) \\&lt;br /&gt;
                        &amp;amp; \;\;\;   + (-i)\left[ n_1\left(\begin{array}{cc}&lt;br /&gt;
                                  0 &amp;amp; 1 \\ &lt;br /&gt;
                                  1 &amp;amp; 0 \end{array}\right)&lt;br /&gt;
                              + n_2\left(\begin{array}{cc}&lt;br /&gt;
                                  0 &amp;amp; -i \\ &lt;br /&gt;
                                  i &amp;amp; 0 \end{array}\right)&lt;br /&gt;
                              + n_3\left(\begin{array}{cc}&lt;br /&gt;
                                  1 &amp;amp; 0 \\ &lt;br /&gt;
                                  0 &amp;amp; -1 \end{array}\right)\right]\sin(\theta/2) \\&lt;br /&gt;
                                &amp;amp;= &lt;br /&gt;
         \left(\begin{array}{cc}&lt;br /&gt;
  \cos(\theta/2) -in_3\sin(\theta/2) &amp;amp; (-in_1-n_2)\sin(\theta/2) \\ &lt;br /&gt;
   (-in_1+n_2)\sin(\theta/2) &amp;amp; \cos(\theta/2) +in_3\sin(\theta/2)  \end{array}\right).&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Notice that this is a ''special unitary matrix.''  (See Section [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Unitary Matrices]].)&lt;br /&gt;
To see that this is the most general SU(2) matrix, one needs to verify that any &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; special unitary matrix can be written in this form.  (One way to do this is to start with a generic matrix and impose the unitarity and determinant constraints.  Alternatively, one may convince oneself that this form is general by acting on basis vectors.)  This is the most general qubit transformation (an overall phase being physically irrelevant) and can be interpreted as a rotation about the axis &amp;lt;math&amp;gt;\hat{n}\,\!&amp;lt;/math&amp;gt; by an angle &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
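The closed form in Eq. (C.24) can be checked numerically (a sketch using NumPy; the unit vector and angle are example values). It relies on the identity (n·σ)² = I for a unit vector n:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

n = np.array([1.0, 2.0, 2.0]) / 3.0          # a unit vector, |n| = 1
theta = 0.7                                  # an arbitrary rotation angle
n_dot_sigma = n[0]*sx + n[1]*sy + n[2]*sz

# The key identity behind the cos/sin expansion of the exponential:
assert np.allclose(n_dot_sigma @ n_dot_sigma, np.eye(2))

# U = cos(theta/2) I - i sin(theta/2) n.sigma
U = np.cos(theta/2) * np.eye(2) - 1j * np.sin(theta/2) * n_dot_sigma

# U is special unitary: U^dagger U = I and det U = 1.
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.isclose(np.linalg.det(U), 1.0)
```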
&lt;br /&gt;
Another parametrization of this set of matrices is the following, called the Euler angle parametrization:&lt;br /&gt;
 {{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U_{EA}   = \exp(-i\sigma_z \alpha/2) \exp(-i\sigma_y \beta/2) \exp(-i\sigma_z \gamma/2).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.25}}&lt;br /&gt;
In this case the choice of &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; is not unique; any two distinct Pauli matrices may be chosen.  This parametrization is quite simple, useful, and generalizable to SU(N) for arbitrary N.  In the simple case of a qubit, one may convince oneself of its generality by acting on basis vectors as before.  Alternatively, with a little thought, one may see that rotating to a position on the sphere by the first angle, followed by rotations through the other two, provides a general orientation of an object.&lt;br /&gt;
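The Euler angle product in Eq. (C.25) can be composed directly (a sketch using NumPy; the three angles are arbitrary example values, and the single-axis rotations reuse the closed-form exponential for a Pauli matrix):

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rot(sigma, angle):
    """exp(-i sigma angle/2), using sigma^2 = I for a Pauli matrix."""
    return np.cos(angle/2) * np.eye(2) - 1j * np.sin(angle/2) * sigma

alpha, beta, gamma = 0.3, 1.1, -0.4    # arbitrary Euler angles
U_EA = rot(sz, alpha) @ rot(sy, beta) @ rot(sz, gamma)

# A product of special unitaries is special unitary.
assert np.allclose(U_EA.conj().T @ U_EA, np.eye(2))
assert np.isclose(np.linalg.det(U_EA), 1.0)
```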
&lt;br /&gt;
====Similarity Transformation====&lt;br /&gt;
&lt;br /&gt;
A ''similarity transformation'' &amp;lt;!--\index{similarity transformation}--&amp;gt; &lt;br /&gt;
of an &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; by an invertible matrix &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;S A S^{-1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
There are (at least) three important things to note about similarity&lt;br /&gt;
transformations: &lt;br /&gt;
#Similarity transformations leave the trace of a matrix unchanged.  This is shown explicitly in [[#The Trace|Section 3.5]].&lt;br /&gt;
#Similarity transformations leave the determinant of a matrix unchanged, or invariant.  This is because &amp;lt;center&amp;gt;&amp;lt;math&amp;gt; \det(SAS^{-1}) = \det(S)\det(A)\det(S^{-1}) =\det(S)\det(A)\frac{1}{\det(S)} = \det(A). \,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
#Simultaneous similarity transformations of matrices in an equation will leave the equation unchanged.  Let &amp;lt;math&amp;gt;A^\prime = SAS^{-1}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B^\prime = SBS^{-1}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;C^\prime = SCS^{-1}\,\!&amp;lt;/math&amp;gt;.  If &amp;lt;math&amp;gt;AB=C\,\!&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;A^\prime B^\prime = C^\prime\,\!&amp;lt;/math&amp;gt;, since &amp;lt;math&amp;gt;A^\prime B^\prime = SAS^{-1}SBS^{-1} = SABS^{-1} =  SCS^{-1}=C^\prime\,\!&amp;lt;/math&amp;gt;.  The two matrices &amp;lt;math&amp;gt;A^\prime\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; are said to be ''similar''.&lt;br /&gt;
&amp;lt;!-- \index{similar matrices} --&amp;gt;&lt;br /&gt;
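The first two invariances above are easy to verify numerically (a sketch using NumPy; `A` and `S` are random example matrices, with `S` generic and hence invertible):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3))        # generic, hence invertible
A_prime = S @ A @ np.linalg.inv(S)     # similar to A

# Trace and determinant are invariant under similarity transformations.
assert np.isclose(np.trace(A_prime), np.trace(A))
assert np.isclose(np.linalg.det(A_prime), np.linalg.det(A))
```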
&lt;br /&gt;
===Eigenvalues and Eigenvectors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- \index{eigenvalues}\index{eigenvectors} --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A matrix can always be brought to diagonal form.  By this, it is meant that for&lt;br /&gt;
every complex matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; there is a diagonal matrix &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; such that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
M = UDV,  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.26}}&lt;br /&gt;
where &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; are unitary matrices.  This form is called a singular value decomposition of the matrix and the entries of the diagonal matrix &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; are called the ''singular values'' &amp;lt;!--\index{singular values}--&amp;gt; &lt;br /&gt;
of the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt;.  However, the singular values are not always easy to find.  &lt;br /&gt;
&lt;br /&gt;
For the special case that the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is Hermitian &amp;lt;math&amp;gt;(M^\dagger = M)\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; can be written as&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
M = U D U^\dagger&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.27}}&lt;br /&gt;
where &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; is unitary &amp;lt;math&amp;gt;(U^{-1}=U^\dagger)\,\!&amp;lt;/math&amp;gt;.  In this case the elements&lt;br /&gt;
of the matrix &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; are called ''eigenvalues''. &amp;lt;!--\index{eigenvalues}--&amp;gt;&lt;br /&gt;
Very often eigenvalues are introduced as solutions to the equation&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
M \left\vert v\right\rangle = \lambda \left\vert v\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\left\vert v\right\rangle\,\!&amp;lt;/math&amp;gt; is an ''eigenvector''. &amp;lt;!--\index{eigenvector} --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To find the eigenvalues and eigenvectors of a matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt;, we follow a&lt;br /&gt;
standard procedure which is to calculate&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(\lambda\mathbb{I} - M) = 0&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.28}}&lt;br /&gt;
and then solve for &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  The different solutions for &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt; form the&lt;br /&gt;
set of eigenvalues, which is called the ''spectrum''. &amp;lt;!-- \index{spectrum}--&amp;gt; Let the different eigenvalues be denoted by &amp;lt;math&amp;gt;\lambda_i\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;i=1,2,...,n\,\!&amp;lt;/math&amp;gt; for an &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix.  If two&lt;br /&gt;
eigenvalues are equal, we say the spectrum is &lt;br /&gt;
''degenerate''. &amp;lt;!--\index{degenerate}--&amp;gt; To find the&lt;br /&gt;
eigenvectors, which correspond to different eigenvalues, the equation &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
M \left\vert v\right\rangle = \lambda_i \left\vert v\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
must be solved for each value of &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;.  Notice that this equation&lt;br /&gt;
holds even if we multiply both sides by some complex number.  This&lt;br /&gt;
implies that an eigenvector can always be scaled.  Usually they are&lt;br /&gt;
normalized to obtain an orthonormal set.  As we will see by example,&lt;br /&gt;
degenerate eigenvalues require some care.  &lt;br /&gt;
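For a Hermitian matrix, this whole procedure, including normalization to an orthonormal set, is available numerically (a sketch using NumPy's `eigh`; the matrix entries are an arbitrary example):

```python
import numpy as np

# A Hermitian matrix with example entries.
M = np.array([[2.0, 1 - 1j],
              [1 + 1j, 0.0]])

# eigh returns real eigenvalues and orthonormal eigenvectors (the columns of U).
evals, U = np.linalg.eigh(M)

# Each column satisfies M|v> = lambda|v>.
for lam, v in zip(evals, U.T):
    assert np.allclose(M @ v, lam * v)

# Orthonormal eigenvectors give the decomposition M = U D U^dagger (Eq. C.27).
assert np.allclose(U @ np.diag(evals) @ U.conj().T, M)
```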
&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
&lt;br /&gt;
Consider a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\sigma = \left(\begin{array}{cc} &lt;br /&gt;
               1+a &amp;amp; b-ic \\&lt;br /&gt;
              b+ic &amp;amp; 1-a  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.29}}&lt;br /&gt;
To find the eigenvalues &amp;lt;!--\index{eigenvalues}--&amp;gt; &lt;br /&gt;
of this, we follow a standard procedure, which&lt;br /&gt;
is to calculate &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(\sigma-\lambda\mathbb{I}) = 0,&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.30}}&lt;br /&gt;
and solve for &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  The eigenvalues of this matrix are given by&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det\left(\begin{array}{cc} &lt;br /&gt;
               1+a-\lambda &amp;amp; b-ic \\&lt;br /&gt;
              b+ic &amp;amp; 1-a-\lambda  \end{array}\right) =0,  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
which implies that the eigenvalues are&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda_{\pm} = 1\pm \sqrt{a^2+b^2+c^2}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
and the corresponding (unnormalized) eigenvectors &amp;lt;!--\index{eigenvectors}--&amp;gt; are&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_+=\left(\begin{array}{c}&lt;br /&gt;
        a + \sqrt{a^2 + b^2 + c^2} \\ &lt;br /&gt;
        b+ic &lt;br /&gt;
        \end{array}\right), \; &lt;br /&gt;
v_-= \left(\begin{array}{c}&lt;br /&gt;
         a - \sqrt{a^2 + b^2 + c^2} \\ &lt;br /&gt;
         b+ic \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
These expressions are useful for calculating properties of qubit&lt;br /&gt;
states as will be seen in the text.&lt;br /&gt;
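The eigenvalue formula above can be spot-checked numerically (a sketch using NumPy; the real parameters a, b, c are arbitrary example values):

```python
import numpy as np

a, b, c = 0.3, -0.5, 0.2                      # arbitrary real parameters
sigma = np.array([[1 + a, b - 1j*c],
                  [b + 1j*c, 1 - a]])

evals = np.linalg.eigvalsh(sigma)             # returned in ascending order
r = np.sqrt(a**2 + b**2 + c**2)

# lambda_± = 1 ± sqrt(a^2 + b^2 + c^2)
assert np.allclose(evals, [1 - r, 1 + r])
```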
&lt;br /&gt;
====Example 2====&lt;br /&gt;
&lt;br /&gt;
Now consider a &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrix,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
N= \left(\begin{array}{ccc}&lt;br /&gt;
              1 &amp;amp; -i &amp;amp; 0 \\&lt;br /&gt;
              i &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 0 &amp;amp; 1 \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
First we calculate&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det\left(\begin{array}{ccc}&lt;br /&gt;
              1-\lambda &amp;amp; -i &amp;amp; 0 \\&lt;br /&gt;
              i         &amp;amp; 1-\lambda  &amp;amp; 0 \\&lt;br /&gt;
              0         &amp;amp;       0    &amp;amp; 1-\lambda &lt;br /&gt;
           \end{array}\right) &lt;br /&gt;
    = (1-\lambda)[(1-\lambda)^2-1].&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This implies that the eigenvalues are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda = 1,0, \mbox{ or } 2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Let &amp;lt;math&amp;gt;\lambda_1=1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\lambda_0 = 0\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\lambda_2 = 2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
To find eigenvectors, &amp;lt;!--\index{eigenvalues}--&amp;gt; we calculate&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Nv &amp;amp;= \lambda v, \\&lt;br /&gt;
\left(\begin{array}{ccc}&lt;br /&gt;
              1 &amp;amp; -i &amp;amp; 0 \\&lt;br /&gt;
              i &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 0 &amp;amp; 1 \end{array}\right)\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right) &amp;amp;= \lambda\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right)&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.31}}&lt;br /&gt;
for each &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
For &amp;lt;math&amp;gt;\lambda = 1\,\!&amp;lt;/math&amp;gt;, we get the following equations:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_1 -iv_2 &amp;amp;= v_1, \\&lt;br /&gt;
iv_1+v_2 &amp;amp;= v_2,  \\&lt;br /&gt;
v_3 &amp;amp;= v_3. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.32}}&lt;br /&gt;
Solving these equations yields &amp;lt;math&amp;gt;v_1 =0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;v_2 =0\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;v_3\,\!&amp;lt;/math&amp;gt; equal to any non-zero number (which will be chosen to normalize the vector).  For &amp;lt;math&amp;gt;\lambda&lt;br /&gt;
=0\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_1  &amp;amp;= iv_2, \\&lt;br /&gt;
v_3 &amp;amp;= 0. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.33}}&lt;br /&gt;
And finally, for &amp;lt;math&amp;gt;\lambda = 2\,\!&amp;lt;/math&amp;gt;, we obtain&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_1 -iv_2 &amp;amp;= 2v_1, \\&lt;br /&gt;
iv_1+v_2 &amp;amp;= 2v_2, \\&lt;br /&gt;
v_3 &amp;amp;= 2v_3, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.34}}&lt;br /&gt;
so that &amp;lt;math&amp;gt;v_1 = -iv_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v_3 = 0\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
Therefore, our three eigenvectors &amp;lt;!--\index{eigenvalues}--&amp;gt; are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_0 = \frac{1}{\sqrt{2}}\left(\begin{array}{c} i \\ 1\\ 0 \end{array}\right), \; &lt;br /&gt;
v_1 = \left(\begin{array}{c} 0 \\ 0\\ 1 \end{array}\right), \; &lt;br /&gt;
v_2 = \frac{1}{\sqrt{2}}\left(\begin{array}{c} -i \\ 1\\ 0 \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The matrix &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
V= (v_0,v_1,v_2) = \left(\begin{array}{ccc}&lt;br /&gt;
              i/\sqrt{2} &amp;amp; 0     &amp;amp; -i/\sqrt{2} \\&lt;br /&gt;
              1/\sqrt{2} &amp;amp; 0     &amp;amp; 1/\sqrt{2} \\&lt;br /&gt;
              0          &amp;amp; 1     &amp;amp; 0 \end{array}\right)&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
is the matrix that diagonalizes &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; in the following way:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
N = VDV^\dagger&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
D = \left(\begin{array}{ccc}&lt;br /&gt;
              0 &amp;amp; 0  &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 1  &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 0  &amp;amp; 2 \end{array}\right)&lt;br /&gt;
.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
We may write this as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
V^\dagger N V = D.   &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This is sometimes called the ''eigenvalue decomposition''&amp;lt;!--\index{eigenvalue decomposition}--&amp;gt;  of the matrix and can also be written as&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
N = \sum_i \lambda_i v_iv^\dagger_i.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.35}}&lt;br /&gt;
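This decomposition can be verified numerically. The following is a minimal pure-Python sketch (no external libraries; matrices are nested lists, and the helper names matmul and dagger are our own choices for this illustration):

```python
from math import sqrt

# Multiply two (possibly complex) matrices stored as nested lists.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Hermitian conjugate (dagger): transpose plus entrywise complex conjugation.
def dagger(A):
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

s = 1 / sqrt(2)
N = [[1, -1j, 0],
     [1j, 1, 0],
     [0, 0, 1]]
V = [[1j * s, 0, -1j * s],   # columns are v_0, v_1, v_2 from the text
     [s,      0,       s],
     [0,      1,       0]]
D = [[0, 0, 0],              # eigenvalues 0, 1, 2 on the diagonal
     [0, 1, 0],
     [0, 0, 2]]

NV = matmul(matmul(V, D), dagger(V))   # V D V-dagger
assert all(round(abs(NV[i][j] - N[i][j]), 9) == 0
           for i in range(3) for j in range(3))
```

The assertion confirms, entry by entry, that V D V-dagger reproduces N.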
&lt;br /&gt;
====Example 3====&lt;br /&gt;
&lt;br /&gt;
Next, consider the complex &amp;lt;math&amp;gt;3\times 3&amp;lt;/math&amp;gt; Hermitian matrix &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
M = \left(\begin{array}{ccc}&lt;br /&gt;
              \frac{5}{2} &amp;amp; 0  &amp;amp; \frac{i}{2} \\&lt;br /&gt;
              0 &amp;amp; 2  &amp;amp; 0 \\&lt;br /&gt;
              -\frac{i}{2} &amp;amp; 0  &amp;amp; \frac{5}{2} \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
First we calculate&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det\left(\begin{array}{ccc}&lt;br /&gt;
              \frac{5}{2}-\lambda &amp;amp; 0 &amp;amp; \frac{i}{2} \\&lt;br /&gt;
              0         &amp;amp; 2-\lambda  &amp;amp; 0 \\&lt;br /&gt;
              -\frac{i}{2}         &amp;amp;       0    &amp;amp; \frac{5}{2}-\lambda &lt;br /&gt;
           \end{array}\right) &lt;br /&gt;
    = (2-\lambda)\left[\left(\frac{5}{2}-\lambda\right)^2-\frac{1}{4}\right].&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This implies that the eigenvalues [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|'''C.6''']] are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda = 2,2, \mbox{ or } 3.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Note that two of the eigenvalues are the same; such repeated eigenvalues are said to be ''degenerate''.  &lt;br /&gt;
Let &amp;lt;math&amp;gt;\lambda_1=2\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\lambda_2 = 2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\lambda_3 = 3\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
To find eigenvectors, &amp;lt;!--\index{eigenvalues}--&amp;gt; we calculate&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Mv &amp;amp;= \lambda v, \\&lt;br /&gt;
\left(\begin{array}{ccc}&lt;br /&gt;
              \frac{5}{2} &amp;amp; 0 &amp;amp; \frac{i}{2} \\&lt;br /&gt;
              0 &amp;amp; 2 &amp;amp; 0 \\&lt;br /&gt;
              -\frac{i}{2} &amp;amp; 0 &amp;amp; \frac{5}{2} \end{array}\right)\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right) &amp;amp;= \lambda\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right)&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.36}}&lt;br /&gt;
for each &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
For &amp;lt;math&amp;gt;\lambda = 3\,\!&amp;lt;/math&amp;gt;, we get the following equations:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{5}{2}v_1 + \frac{i}{2}v_3 &amp;amp;= 3v_1, \\&lt;br /&gt;
2v_2 &amp;amp;= 3v_2,  \\&lt;br /&gt;
-\frac{i}{2}v_1 + \frac{5}{2}v_3 &amp;amp;= 3v_3, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.37}}&lt;br /&gt;
so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
iv_3  &amp;amp;= v_1, \\&lt;br /&gt;
v_2 &amp;amp;= 0. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.38}}&lt;br /&gt;
Now for &amp;lt;math&amp;gt;\lambda = 2\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{5}{2}v_1 +\frac{i}{2}v_3 &amp;amp;= 2 v_1, \\&lt;br /&gt;
2v_2 &amp;amp;= 2v_2, \\&lt;br /&gt;
-\frac{i}{2}v_1 + \frac{5}{2}v_3 &amp;amp;= 2v_3, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.39}}&lt;br /&gt;
so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_3  &amp;amp;= iv_1, \\&lt;br /&gt;
v_2 &amp;amp;= \mbox{anything}. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.40}}&lt;br /&gt;
We would like to have a set of orthonormal eigenvectors.  (For a Hermitian matrix, the set can always be chosen to be orthonormal.)  We take the three eigenvectors to be of the form&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_2 = \left(\begin{array}{c} 1 \\ a \\ i&lt;br /&gt;
  \end{array}\right), \;\;  &lt;br /&gt;
v_2^\prime = \left(\begin{array}{c} 1 \\ a^\prime \\ i&lt;br /&gt;
  \end{array}\right), \;\;&lt;br /&gt;
v_3 = \left(\begin{array}{c} i \\ 0 \\ 1&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
We set the inner product of the two vectors &amp;lt;math&amp;gt; v_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v_2^\prime \,\!&amp;lt;/math&amp;gt; equal to zero so that they are orthogonal: &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
1 + a^* a^\prime + 1 = 2 + a^* a^\prime = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now we can choose &amp;lt;math&amp;gt; a = \sqrt{2}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; a^\prime = -\sqrt{2}\,\!&amp;lt;/math&amp;gt; so that the normalized eigenvectors are&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_2 = \frac{1}{2}\left(\begin{array}{c} 1 \\ \sqrt{2} \\ i&lt;br /&gt;
  \end{array}\right), \;\;  &lt;br /&gt;
v_2^\prime = \frac{1}{2}\left(\begin{array}{c} 1 \\ -\sqrt{2} \\ i&lt;br /&gt;
  \end{array}\right), \;\;&lt;br /&gt;
v_3 = \frac{1}{\sqrt{2}}\left(\begin{array}{c} i \\ 0 \\ 1&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
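We can confirm numerically that this set is orthonormal and that each vector satisfies Mv = lambda v. The following pure-Python sketch stores matrices and vectors as plain lists (the helper name inner is our own choice):

```python
from math import sqrt

# Inner product with the conjugate taken on the first vector, as in the text.
def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

M = [[2.5, 0, 0.5j],
     [0,   2, 0],
     [-0.5j, 0, 2.5]]
v2  = [0.5,  sqrt(2) / 2, 0.5j]        # eigenvalue 2
v2p = [0.5, -sqrt(2) / 2, 0.5j]        # eigenvalue 2
v3  = [1j / sqrt(2), 0, 1 / sqrt(2)]   # eigenvalue 3

# Orthonormality of the chosen set
assert round(abs(inner(v2, v2p)), 9) == 0
assert round(abs(inner(v2, v3)), 9) == 0
assert round(abs(inner(v2, v2)) - 1, 9) == 0

# Mv = lambda v for each eigenvector
for lam, v in [(2, v2), (2, v2p), (3, v3)]:
    Mv = [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]
    assert all(round(abs(Mv[i] - lam * v[i]), 9) == 0 for i in range(3))
```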
&lt;br /&gt;
===Tensor Products===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The tensor product, &amp;lt;!--\index{tensor product} --&amp;gt;&lt;br /&gt;
or the Kronecker product, &amp;lt;!--\index{Kronecker product}--&amp;gt;&lt;br /&gt;
is used extensively in quantum mechanics and&lt;br /&gt;
throughout the course.  It is commonly denoted with a &amp;lt;math&amp;gt;\otimes\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
symbol, although this is often left out.  In fact, the following&lt;br /&gt;
are commonly found in the literature as notation for the tensor&lt;br /&gt;
product of two vectors &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert\Phi\right\rangle\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert\Psi\right\rangle\otimes\left\vert\Phi\right\rangle &amp;amp;= \left\vert\Psi\right\rangle\left\vert\Phi\right\rangle  \\&lt;br /&gt;
                         &amp;amp;= \left\vert\Psi\Phi\right\rangle.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.41}}&lt;br /&gt;
Each of these notations has its advantages, and all of them will be used in&lt;br /&gt;
different circumstances in this text.  &lt;br /&gt;
&lt;br /&gt;
The tensor product is also often used for operators.  Several&lt;br /&gt;
examples &lt;br /&gt;
will be given, one that explicitly calculates the tensor product for&lt;br /&gt;
two vectors and one that calculates it for two matrices which could&lt;br /&gt;
represent operators.  However, the two cases are not really different,&lt;br /&gt;
since a vector is just a &amp;lt;math&amp;gt;1\times n\,\!&amp;lt;/math&amp;gt; or an &amp;lt;math&amp;gt;n\times 1\,\!&amp;lt;/math&amp;gt; matrix.  It is also&lt;br /&gt;
noteworthy that the two objects in the tensor product need not be of&lt;br /&gt;
the same type.  In general, a tensor product of an &amp;lt;math&amp;gt;n\times m\,\!&amp;lt;/math&amp;gt; object&lt;br /&gt;
(array) with a &amp;lt;math&amp;gt;p\times q\,\!&amp;lt;/math&amp;gt; object will produce an &amp;lt;math&amp;gt;np\times mq\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
object.  &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
The tensor product of two objects is computed as follows.&lt;br /&gt;
Let &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; be an &amp;lt;math&amp;gt;n\times m\,\!&amp;lt;/math&amp;gt; array and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; a &amp;lt;math&amp;gt;p\times q\,\!&amp;lt;/math&amp;gt; array, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
A = \left(\begin{array}{cccc} &lt;br /&gt;
           a_{11} &amp;amp; a_{12} &amp;amp; \cdots &amp;amp; a_{1m} \\&lt;br /&gt;
           a_{21} &amp;amp; a_{22} &amp;amp; \cdots &amp;amp; a_{2m} \\&lt;br /&gt;
           \vdots &amp;amp;        &amp;amp; \ddots &amp;amp;      \\&lt;br /&gt;
           a_{n1} &amp;amp; a_{n2} &amp;amp; \cdots &amp;amp; a_{nm} \end{array}\right),&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.42}}&lt;br /&gt;
and similarly for &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;.  Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
A\otimes B = \left(\begin{array}{cccc} &lt;br /&gt;
             a_{11}B &amp;amp; a_{12}B &amp;amp; \cdots &amp;amp; a_{1m}B \\&lt;br /&gt;
             a_{21}B &amp;amp; a_{22}B &amp;amp; \cdots &amp;amp; a_{2m}B \\&lt;br /&gt;
             \vdots  &amp;amp;         &amp;amp; \ddots &amp;amp;      \\&lt;br /&gt;
             a_{n1}B &amp;amp; a_{n2}B &amp;amp; \cdots &amp;amp; a_{nm}B \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.43}}&lt;br /&gt;
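Equation (C.43) translates directly into code. Below is a small pure-Python sketch of the Kronecker product for matrices stored as nested lists (the helper name kron is our own choice): block (i, j) of the result is the (i, j) entry of A times the whole matrix B.

```python
# Kronecker (tensor) product following Eq. (C.43): an (n x m) array with a
# (p x q) array gives an (np x mq) array whose blocks are a_ij * B.
def kron(A, B):
    n, m = len(A), len(A[0])
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(m * q)] for i in range(n * p)]

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
AB = kron(A, B)
assert AB == [[0, 5, 0, 10],
              [6, 7, 12, 14],
              [0, 15, 0, 20],
              [18, 21, 24, 28]]
```

The integer-division and modulus indices simply pick out which block of the result, and which entry within that block, a given position belongs to.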
&lt;br /&gt;
Let us now consider two examples.  First let &amp;lt;math&amp;gt;\left\vert\phi\right\rangle\,\!&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; be as before,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi\right\rangle = \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right) \;\; &lt;br /&gt;
\mbox{and} \;\; &lt;br /&gt;
\left\vert\phi\right\rangle = \left(\begin{array}{c} \gamma \\ \delta&lt;br /&gt;
  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert\psi\right\rangle\otimes\left\vert\phi\right\rangle &amp;amp;= \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right) &lt;br /&gt;
\otimes &lt;br /&gt;
\left(\begin{array}{c} \gamma \\ \delta&lt;br /&gt;
  \end{array}\right)   \\&lt;br /&gt;
                            &amp;amp;= \left(\begin{array}{c} \alpha\gamma\\ &lt;br /&gt;
                                                     \alpha\delta \\&lt;br /&gt;
                                                     \beta\gamma \\ &lt;br /&gt;
                                                     \beta\delta &lt;br /&gt;
                                       \end{array}\right).  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.44}}&lt;br /&gt;
Also&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert\psi\right\rangle\otimes\left\langle\phi\right\vert &amp;amp;= \left\vert\psi\right\rangle\left\langle\phi\right\vert \\&lt;br /&gt;
                            &amp;amp;= \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right) &lt;br /&gt;
\otimes &lt;br /&gt;
\left(\begin{array}{cc} \gamma^* &amp;amp; \delta^*&lt;br /&gt;
  \end{array}\right)   \\&lt;br /&gt;
                            &amp;amp;= \left(\begin{array}{cc}&lt;br /&gt;
                                \alpha\gamma^* &amp;amp; \alpha\delta^* \\&lt;br /&gt;
                                \beta\gamma^*  &amp;amp; \beta\delta^* &lt;br /&gt;
                                       \end{array}\right).  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.45}}&lt;br /&gt;
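The vector tensor product of Eq. (C.44) can be reproduced with a one-line helper (the example amplitudes below are an arbitrary normalized qubit state; the name kron_vec is our own choice):

```python
from math import sqrt

# Tensor product of two column vectors stored as flat lists: each entry of
# the first vector multiplies the whole second vector, as in Eq. (C.44).
def kron_vec(u, v):
    return [a * b for a in u for b in v]

alpha, beta = 0.6, 0.8j              # an example normalized qubit state
gamma, delta = 1 / sqrt(2), 1 / sqrt(2)
psi = [alpha, beta]
phi = [gamma, delta]

product = kron_vec(psi, phi)
assert product == [alpha * gamma, alpha * delta, beta * gamma, beta * delta]
```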
&lt;br /&gt;
Now consider two matrices&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
A = \left(\begin{array}{cc} &lt;br /&gt;
                 a &amp;amp; b \\&lt;br /&gt;
                 c &amp;amp; d  \end{array}\right) &lt;br /&gt;
 \;\; \mbox{and} \;\;&lt;br /&gt;
B = \left(\begin{array}{cc} &lt;br /&gt;
               e &amp;amp; f \\&lt;br /&gt;
               g &amp;amp; h  \end{array}\right).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A\otimes B &amp;amp;=  \left(\begin{array}{cc} &lt;br /&gt;
                 a &amp;amp; b \\&lt;br /&gt;
                 c &amp;amp; d  \end{array}\right) &lt;br /&gt;
 \otimes&lt;br /&gt;
               \left(\begin{array}{cc} &lt;br /&gt;
                 e &amp;amp; f \\&lt;br /&gt;
                 g &amp;amp; h  \end{array}\right)   \\  &lt;br /&gt;
           &amp;amp;=  \left(\begin{array}{cccc} &lt;br /&gt;
                 ae &amp;amp; af &amp;amp; be &amp;amp; bf \\&lt;br /&gt;
                 ag &amp;amp; ah &amp;amp; bg &amp;amp; bh \\&lt;br /&gt;
                 ce &amp;amp; cf &amp;amp; de &amp;amp; df \\&lt;br /&gt;
                 cg &amp;amp; ch &amp;amp; dg &amp;amp; dh \end{array}\right).  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.46}}&lt;br /&gt;
&lt;br /&gt;
====Properties of Tensor Products====&lt;br /&gt;
&lt;br /&gt;
Listed here are useful properties of tensor products, where &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; are arrays whose dimensions are compatible wherever products and sums appear:&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)(C\otimes D) = AC \otimes BD\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)^T = A^T\otimes B^T\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)^* = A^*\otimes B^*\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)\otimes C = A\otimes(B\otimes C)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A+B) \otimes C = A\otimes C+B\otimes C\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;A\otimes(B+C) = A\otimes B + A\otimes C\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;\mbox{Tr}(A\otimes B) = \mbox{Tr}(A)\mbox{Tr}(B)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
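These properties can be spot-checked numerically. The sketch below tests properties 1 and 7 on random 2x2 matrices, with small self-contained helpers (our own illustrative implementations) for the Kronecker product, matrix multiplication, and trace:

```python
import random

def kron(A, B):
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

random.seed(0)
rand2 = lambda: [[random.random() for _ in range(2)] for _ in range(2)]
A, B, C, D = rand2(), rand2(), rand2(), rand2()

# Property 1: (A x B)(C x D) = AC x BD
lhs = matmul(kron(A, B), kron(C, D))
rhs = kron(matmul(A, C), matmul(B, D))
assert all(round(abs(lhs[i][j] - rhs[i][j]), 9) == 0
           for i in range(4) for j in range(4))

# Property 7: Tr(A x B) = Tr(A) Tr(B)
assert round(abs(trace(kron(A, B)) - trace(A) * trace(B)), 9) == 0
```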
(See [[Bibliography#HornNJohnsonII|Horn and Johnson, Topics in Matrix Analysis]], Chapter 4.)&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_C_-_Vectors_and_Linear_Algebra&amp;diff=1788</id>
		<title>Appendix C - Vectors and Linear Algebra</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Appendix_C_-_Vectors_and_Linear_Algebra&amp;diff=1788"/>
		<updated>2012-01-05T08:52:24Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Index Notation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
This appendix introduces some aspects of linear algebra and complex&lt;br /&gt;
algebra that will be helpful for the course.  In addition, Dirac&lt;br /&gt;
notation is introduced and explained.&lt;br /&gt;
&lt;br /&gt;
===Vectors===&lt;br /&gt;
&lt;br /&gt;
Here we review some facts about real vectors before discussing the representation and complex analogues used in quantum mechanics.  &lt;br /&gt;
&lt;br /&gt;
====Real Vectors====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The simple definition of a vector --- an object that has magnitude and&lt;br /&gt;
direction --- is helpful to keep in mind even when dealing with complex&lt;br /&gt;
and/or abstract vectors as we will here.  In three-dimensional space,&lt;br /&gt;
a vector is often written as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v} = v_x\hat{x} + v_y \hat{y} + v_z\hat{z},&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where the hat (&amp;lt;math&amp;gt;\hat{\cdot}\,\!&amp;lt;/math&amp;gt;) denotes a unit vector and the components&lt;br /&gt;
&amp;lt;math&amp;gt;v_i\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;i = x,y,z\,\!&amp;lt;/math&amp;gt; are just numbers.  The unit vectors are also&lt;br /&gt;
known as ''basis'' vectors. &amp;lt;!-- \index{basis vectors!real} --&amp;gt; &lt;br /&gt;
This is because any vector&lt;br /&gt;
in real three-dimensional space can be written in terms of these unit/basis vectors.  In&lt;br /&gt;
some sense they are the basic components of any vector.  Other basis&lt;br /&gt;
vectors could be used, however, such as in spherical and cylindrical coordinates.  When dealing with more abstract and/or complex vectors,&lt;br /&gt;
it is often helpful to ask what one would do for an ordinary&lt;br /&gt;
three-dimensional vector.  For example, properties of unit vectors,&lt;br /&gt;
dot products, etc. in three-dimensions are similar to the analogous&lt;br /&gt;
constructions in higher dimensions.  &lt;br /&gt;
&lt;br /&gt;
The ''inner product'',&amp;lt;!-- \index{inner product}--&amp;gt; or ''dot product'',&amp;lt;!--\index{dot product}--&amp;gt; for two real three-dimensional vectors,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v} = v_x\hat{x} + v_y \hat{y} + v_z\hat{z}, \;\; &lt;br /&gt;
\vec{w} = w_x\hat{x} + w_y \hat{y} + w_z\hat{z},&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
can be computed as follows:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v}\cdot\vec{w} = v_xw_x + v_yw_y + v_zw_z.&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
For the inner product of &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt; with itself, we get the square of&lt;br /&gt;
the magnitude of &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt;, denoted &amp;lt;math&amp;gt;|\vec{v}|^2\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
|\vec{v}|^2 = \vec{v}\cdot\vec{v} = v_xv_x + v_yv_y +&lt;br /&gt;
v_zv_z=v_x^2+v_y^2+v_z^2. &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If we want a unit vector in the direction of &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt;, we can simply divide it&lt;br /&gt;
by its magnitude:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\hat{v} = \frac{\vec{v}}{|\vec{v}|}.  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now, of course, &amp;lt;math&amp;gt;\hat{v}\cdot\hat{v}= 1\,\!&amp;lt;/math&amp;gt;, which can easily be checked.  &lt;br /&gt;
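That check is easy to carry out; here is a minimal pure-Python sketch with the vector components stored as a plain list:

```python
from math import sqrt

# Magnitude of a real 3-vector, and the unit vector in its direction.
v = [3.0, 0.0, 4.0]
mag = sqrt(sum(c * c for c in v))
vhat = [c / mag for c in v]

assert mag == 5.0
# The unit vector dotted with itself gives 1.
assert round(sum(c * c for c in vhat), 9) == 1.0
```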
&lt;br /&gt;
There are several ways to represent a vector.  The ones we will use&lt;br /&gt;
most often are column and row vector notations.  So, for example, we&lt;br /&gt;
could write the vector above as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\vec{v} = \left(\begin{array}{c} v_x \\ v_y \\ v_z&lt;br /&gt;
  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
In this case, our unit vectors are represented by the following: &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\hat{x} = \left(\begin{array}{c} 1 \\ 0 \\ 0&lt;br /&gt;
  \end{array}\right), \;\;  &lt;br /&gt;
\hat{y} = \left(\begin{array}{c} 0 \\ 1 \\ 0&lt;br /&gt;
  \end{array}\right), \;\;&lt;br /&gt;
\hat{z} = \left(\begin{array}{c} 0 \\ 0 \\ 1&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We next turn to the subject of complex vectors and the relevant&lt;br /&gt;
notation. &lt;br /&gt;
We will see how to compute the inner product later, since some other&lt;br /&gt;
definitions are required.&lt;br /&gt;
&lt;br /&gt;
====Complex Vectors====&lt;br /&gt;
&lt;br /&gt;
For complex vectors in quantum mechanics, Dirac notation is used most often.  This notation uses a &amp;lt;math&amp;gt;\left\vert \cdot \right\rangle\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
called a ''ket'', for a vector.  So our vector &amp;lt;math&amp;gt;\vec{v}\,\!&amp;lt;/math&amp;gt; would be&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert v \right\rangle  = \left(\begin{array}{c} v_x \\ v_y \\ v_z&lt;br /&gt;
  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For qubits, i.e. two-state quantum systems, complex vectors will often be used:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align} \left\vert \psi \right\rangle &amp;amp;= \left(\begin{array}{c} \alpha \\ \beta &lt;br /&gt;
  \end{array}\right) \\&lt;br /&gt;
           &amp;amp;=\alpha \left\vert 0\right\rangle + \beta\left\vert 1\right\rangle,\end{align}&amp;lt;/math&amp;gt;|C.1}} &lt;br /&gt;
where&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert 0\right\rangle = \left(\begin{array}{c} 1 \\ 0 &lt;br /&gt;
  \end{array}\right), \;\;\mbox{and} \;\;&lt;br /&gt;
\left\vert 1\right\rangle = \left(\begin{array}{c} 0 \\ 1 &lt;br /&gt;
  \end{array}\right)&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
are the basis vectors.  The two numbers &amp;lt;math&amp;gt;\alpha\,\!&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\beta\,\!&amp;lt;/math&amp;gt; are complex numbers, so the vector is said to&lt;br /&gt;
be a complex vector.&lt;br /&gt;
&lt;br /&gt;
====Inner Product====&lt;br /&gt;
&lt;br /&gt;
Now let us suppose we have another complex vector,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \phi \right\rangle  = \left(\begin{array}{c} \gamma \\ \delta &lt;br /&gt;
  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The ''inner product'' between two vectors is often written as &amp;lt;math&amp;gt;\left\langle \phi \vert \psi \right\rangle \;\! &amp;lt;/math&amp;gt;, which is the same as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align} (\left\vert \phi \right\rangle )^\dagger\left\vert \psi \right\rangle &lt;br /&gt;
&amp;amp;= \left(\begin{array}{c} \gamma \\ \delta \end{array}\right)^\dagger&lt;br /&gt;
\left(\begin{array}{c} \alpha \\ \beta   \end{array}\right) \\&lt;br /&gt;
           &amp;amp;= \left(\begin{array}{cc} \gamma^* &amp;amp; \delta^* \end{array}\right) \left(\begin{array}{c} \alpha \\ \beta   \end{array}\right) \\  &lt;br /&gt;
           &amp;amp;= \gamma^*\alpha + \delta^*\beta \end{align} \;\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
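As a worked example, this inner product can be computed directly in Python; the particular values of alpha, beta, gamma, and delta below are arbitrary:

```python
# Inner product: conjugate the components of the first ket, then multiply
# and sum, exactly as in the matrix computation above.
gamma, delta = 2 - 1j, 3j
alpha, beta = 1 + 1j, 4

inner_pq = gamma.conjugate() * alpha + delta.conjugate() * beta
inner_qp = alpha.conjugate() * gamma + beta.conjugate() * delta

# Swapping the two vectors conjugates the inner product.
assert inner_pq == inner_qp.conjugate()
```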
&lt;br /&gt;
====Outer Product====&lt;br /&gt;
&lt;br /&gt;
The ''outer product'' between these same two vectors is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt; &lt;br /&gt;
\begin{align} (\left\vert \phi \right\rangle )(\left\vert \psi \right\rangle)^\dagger &lt;br /&gt;
 &amp;amp;=  \left\vert \phi \right\rangle \left\langle \psi \right\vert \\&lt;br /&gt;
&amp;amp;= \left(\begin{array}{c} \gamma \\ \delta \end{array}\right)&lt;br /&gt;
\left(\begin{array}{c} \alpha \\ \beta   \end{array}\right)^\dagger \\&lt;br /&gt;
           &amp;amp;= \left(\begin{array}{c} \gamma \\ \delta \end{array}\right) \left(\begin{array}{cc} \alpha^* &amp;amp; \beta^*   \end{array}\right) \\  &lt;br /&gt;
           &amp;amp;=   \left(\begin{array}{cc} \gamma\alpha^* &amp;amp; \gamma\beta^* \\  \delta\alpha^* &amp;amp; \delta\beta^*  \end{array}\right) \end{align}\;\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
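The outer product is just as direct to compute; the sketch below builds the 2x2 matrix whose (i, j) entry is the i-th component of phi times the conjugate of the j-th component of psi, as in the final line above (example values arbitrary):

```python
gamma, delta = 2 - 1j, 3j
alpha, beta = 1 + 1j, 4
phi = [gamma, delta]
psi = [alpha, beta]

# Outer product of phi with the dagger of psi: entry (i, j) is phi_i * conj(psi_j).
outer = [[p * q.conjugate() for q in psi] for p in phi]

assert outer[0][0] == gamma * alpha.conjugate()
assert outer[1][1] == delta * beta.conjugate()
```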
&lt;br /&gt;
===Linear Algebra: Matrices===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
There are many aspects of linear algebra that are quite useful in&lt;br /&gt;
quantum mechanics.  We will briefly discuss several of these aspects here.&lt;br /&gt;
First, some definitions and properties are provided that will&lt;br /&gt;
be useful.  Some familiarity with matrices&lt;br /&gt;
will be assumed, although many basic definitions are also included.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Let us denote some &amp;lt;math&amp;gt;m\times n\,\!&amp;lt;/math&amp;gt; matrix by &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;.  The set of all &amp;lt;math&amp;gt;m\times&lt;br /&gt;
n\,\!&amp;lt;/math&amp;gt; matrices with real entries is &amp;lt;math&amp;gt;M(m\times n,\mathbb{R})\,\!&amp;lt;/math&amp;gt;.  Such matrices&lt;br /&gt;
are said to be real since all of their entries are real.  Similarly, the&lt;br /&gt;
set of &amp;lt;math&amp;gt;m\times n\,\!&amp;lt;/math&amp;gt; complex matrices is &amp;lt;math&amp;gt;M(m\times n,\mathbb{C})\,\!&amp;lt;/math&amp;gt;.  For the&lt;br /&gt;
set of square &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; complex matrices, we simply write&lt;br /&gt;
&amp;lt;math&amp;gt;M(n,\mathbb{C})\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We will also refer to the set of matrix elements, &amp;lt;math&amp;gt;a_{ij}\,\!&amp;lt;/math&amp;gt;, where the&lt;br /&gt;
first index (&amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; in this case) labels the row and the second &amp;lt;math&amp;gt;(j)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
labels the column.  Thus the element &amp;lt;math&amp;gt;a_{23}\,\!&amp;lt;/math&amp;gt; is the element in the&lt;br /&gt;
second row and third column.  A comma is inserted if there is some&lt;br /&gt;
ambiguity.  For example, in a large matrix the element in the&lt;br /&gt;
2nd row and 12th&lt;br /&gt;
column is written as &amp;lt;math&amp;gt;a_{2,12}\,\!&amp;lt;/math&amp;gt; to distinguish it from &amp;lt;math&amp;gt;a_{21,2}\,\!&amp;lt;/math&amp;gt;, the element in the&lt;br /&gt;
21st row and 2nd column.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Complex Conjugate====&lt;br /&gt;
&lt;br /&gt;
The ''complex conjugate of a matrix'' &amp;lt;!-- \index{complex conjugate!of a matrix}--&amp;gt;&lt;br /&gt;
is the matrix with each element replaced by its complex conjugate.  In&lt;br /&gt;
other words, to take the complex conjugate of a matrix, one takes the&lt;br /&gt;
complex conjugate of each entry in the matrix.  We denote the complex&lt;br /&gt;
conjugate with a star, like this: &amp;lt;math&amp;gt;A^*\,\!&amp;lt;/math&amp;gt;.  For example,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A^* &amp;amp;= \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11} &amp;amp; a_{12} &amp;amp; a_{13} \\&lt;br /&gt;
        a_{21} &amp;amp; a_{22} &amp;amp; a_{23} \\&lt;br /&gt;
        a_{31} &amp;amp; a_{32} &amp;amp; a_{33} \end{array}\right)^*  \\&lt;br /&gt;
    &amp;amp;= \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11}^* &amp;amp; a_{12}^* &amp;amp; a_{13}^* \\&lt;br /&gt;
        a_{21}^* &amp;amp; a_{22}^* &amp;amp; a_{23}^* \\&lt;br /&gt;
        a_{31}^* &amp;amp; a_{32}^* &amp;amp; a_{33}^* \end{array}\right). &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.2}}&lt;br /&gt;
(Notice that the notation for a matrix is a capital letter, whereas&lt;br /&gt;
the entries are represented by lower case&lt;br /&gt;
letters.)&lt;br /&gt;
&lt;br /&gt;
====Transpose====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''transpose'' &amp;lt;!-- \index{transpose} --&amp;gt; of a matrix is the same set of&lt;br /&gt;
elements, but now the first row becomes the first column, the second row&lt;br /&gt;
becomes the second column, and so on.  Thus the rows and columns are&lt;br /&gt;
interchanged.  For example, for a square &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrix, the&lt;br /&gt;
transpose is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A^T &amp;amp;= \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11} &amp;amp; a_{12} &amp;amp; a_{13} \\&lt;br /&gt;
        a_{21} &amp;amp; a_{22} &amp;amp; a_{23} \\&lt;br /&gt;
        a_{31} &amp;amp; a_{32} &amp;amp; a_{33} \end{array}\right)^T \\&lt;br /&gt;
    &amp;amp;= \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11} &amp;amp; a_{21} &amp;amp; a_{31} \\&lt;br /&gt;
        a_{12} &amp;amp; a_{22} &amp;amp; a_{32} \\&lt;br /&gt;
        a_{13} &amp;amp; a_{23} &amp;amp; a_{33} \end{array}\right). &lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.3}}&lt;br /&gt;
&lt;br /&gt;
====Hermitian Conjugate====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The complex conjugate of the transpose of a matrix is called the ''Hermitian conjugate'', or simply the ''dagger'', of the matrix.  It is so called because of the symbol used to denote it,&lt;br /&gt;
(&amp;lt;math&amp;gt;\dagger\,\!&amp;lt;/math&amp;gt;):&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
(A^T)^* = (A^*)^T \equiv A^\dagger.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.4}}&lt;br /&gt;
For our &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; example, &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
A^\dagger = \left(\begin{array}{ccc}&lt;br /&gt;
        a_{11}^* &amp;amp; a_{21}^* &amp;amp; a_{31}^* \\&lt;br /&gt;
        a_{12}^* &amp;amp; a_{22}^* &amp;amp; a_{32}^* \\&lt;br /&gt;
        a_{13}^* &amp;amp; a_{23}^* &amp;amp; a_{33}^* \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If a matrix is its own Hermitian conjugate, i.e. &amp;lt;math&amp;gt;A^\dagger = A\,\!&amp;lt;/math&amp;gt;, then&lt;br /&gt;
we call it a ''Hermitian matrix''.  &amp;lt;!-- \index{Hermitian matrix}--&amp;gt;&lt;br /&gt;
(Clearly this is only possible for square matrices.) Hermitian&lt;br /&gt;
matrices are very important in quantum mechanics since their&lt;br /&gt;
eigenvalues are real.  (See Sec.([[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|Eigenvalues and Eigenvectors]]).)&lt;br /&gt;
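A short sketch makes the dagger operation and the Hermiticity test concrete; the 2x2 matrix below is an arbitrary Hermitian example, and the helper name dagger is our own choice:

```python
# Hermitian conjugate: transpose the matrix and conjugate each entry.
def dagger(A):
    return [[A[j][i].conjugate() for j in range(len(A))]
            for i in range(len(A[0]))]

H = [[1, 2 + 1j],
     [2 - 1j, 3]]       # an arbitrary 2x2 Hermitian example

# A matrix is Hermitian exactly when it equals its own dagger.
assert dagger(H) == H
```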
&lt;br /&gt;
====Index Notation====&lt;br /&gt;
&lt;br /&gt;
Very often we write the product of two matrices &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; simply as&lt;br /&gt;
&amp;lt;math&amp;gt;AB\,\!&amp;lt;/math&amp;gt; and let &amp;lt;math&amp;gt;C=AB\,\!&amp;lt;/math&amp;gt;.  However, it is also quite useful to write this&lt;br /&gt;
in component form.  In this case, if these are &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrices, the component form will be &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
c_{ik} = \sum_{j=1}^n a_{ij}b_{jk}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This says that the element in the &amp;lt;math&amp;gt;i^{\mbox{th}}\,\!&amp;lt;/math&amp;gt; row and&lt;br /&gt;
&amp;lt;math&amp;gt;k^{\mbox{th}}\,\!&amp;lt;/math&amp;gt; column of the matrix &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; is the sum &amp;lt;math&amp;gt;\sum_{j=1}^n&lt;br /&gt;
a_{ij}b_{jk}\,\!&amp;lt;/math&amp;gt;.  The transpose of &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt; has elements&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
c_{ki} = \sum_{j=1}^n a_{kj}b_{ji}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Rewriting this in terms of the elements of the transposed matrices, &amp;lt;math&amp;gt;a^T_{jk} = a_{kj}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;b^T_{ij} = b_{ji}\,\!&amp;lt;/math&amp;gt;, this reads&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
c_{ki} = \sum_{j=1}^n a^T_{jk} b^T_{ij} = \sum_{j=1}^n b^T_{ij} a^T_{jk}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This gives us a way of seeing the general rule that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
C^T = B^TA^T.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
It follows that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
C^\dagger = B^\dagger A^\dagger.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
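The order-reversal rules above are easy to verify numerically (a NumPy sketch with random matrices, added here as an illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
C = A @ B

# Transposing a product reverses the order: C^T = B^T A^T ...
assert np.allclose(C.T, B.T @ A.T)

# ... and the same holds for the Hermitian conjugate: C^dagger = B^dagger A^dagger.
assert np.allclose(C.conj().T, B.conj().T @ A.conj().T)
```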
&lt;br /&gt;
====The Trace====&lt;br /&gt;
&lt;br /&gt;
The ''trace'' &amp;lt;!-- \index{trace}--&amp;gt; of a matrix is the sum of the diagonal&lt;br /&gt;
elements and is denoted &amp;lt;math&amp;gt;\mbox{Tr}\,\!&amp;lt;/math&amp;gt;.  So for example, the trace of an&lt;br /&gt;
&amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mbox{Tr}(A) = \sum_{i=1}^n a_{ii}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some useful properties of the trace are the following:&lt;br /&gt;
#&amp;lt;math&amp;gt;\mbox{Tr}(AB) = \mbox{Tr}(BA)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;\mbox{Tr}(A + B) = \mbox{Tr}(A) + \mbox{Tr}(B)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Using the first of these results,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mbox{Tr}(UAU^{-1}) = \mbox{Tr}(U^{-1}UA) = \mbox{Tr}(A).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This relation is used so often that we state it here explicitly.&lt;br /&gt;
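The two trace properties, and the similarity invariance that follows from the first of them, can be checked directly (a NumPy sketch, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Tr(AB) = Tr(BA): the cyclic property.
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# Tr(A + B) = Tr(A) + Tr(B): linearity.
assert np.isclose(np.trace(A + B), np.trace(A) + np.trace(B))

# Hence Tr(U A U^-1) = Tr(U^-1 U A) = Tr(A).
U = rng.standard_normal((4, 4))          # almost surely invertible
assert np.isclose(np.trace(U @ A @ np.linalg.inv(U)), np.trace(A))
```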
&lt;br /&gt;
====The Determinant====&lt;br /&gt;
&lt;br /&gt;
For a square matrix, the determinant is quite a useful thing.  For&lt;br /&gt;
example, an &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix is invertible if and only if its&lt;br /&gt;
determinant is not zero.  So let us define the determinant and give&lt;br /&gt;
some properties and examples.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''determinant''&amp;lt;!--\index{determinant}--&amp;gt; of a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; matrix, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
N = \left(\begin{array}{cc}&lt;br /&gt;
                 a &amp;amp; b \\&lt;br /&gt;
                 c &amp;amp; d \end{array}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.5}}&lt;br /&gt;
is given by&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(N) = ad-bc.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.6}}&lt;br /&gt;
Higher-order determinants can be written in terms of smaller ones using the&lt;br /&gt;
standard cofactor (Laplace) expansion.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''determinant''&amp;lt;!-- \index{determinant}--&amp;gt; of a matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; can&lt;br /&gt;
also be written in terms of its components as &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(A) = \sum_{i,j,k,l,...} \epsilon_{ijkl...}&lt;br /&gt;
a_{1i}a_{2j}a_{3k}a_{4l} ...,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.7}}&lt;br /&gt;
where the symbol &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{ijkl...} = \begin{cases}&lt;br /&gt;
                       +1, \; \mbox{if } \; ijkl... = 1234... (\mbox{in order, or any even number of permutations}),\\&lt;br /&gt;
                       -1, \; \mbox{if } \; ijkl... = 2134... (\mbox{or any odd number of permutations}),\\&lt;br /&gt;
                       \;\;\; 0, \; \mbox{otherwise}, \; (\mbox{meaning any index is repeated}).&lt;br /&gt;
                      \end{cases}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.8}}&lt;br /&gt;
&lt;br /&gt;
Let us consider the example of the &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; given&lt;br /&gt;
above.  The determinant can be calculated by&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(A) = \sum_{i,j,k} \epsilon_{ijk}&lt;br /&gt;
a_{1i}a_{2j}a_{3k},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where, explicitly, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\epsilon_{ijk} = \begin{cases}&lt;br /&gt;
                       +1, \;\mbox{if }\; ijk= 123,231,\; \mbox{or}\; 312, (\mbox{These are even permutations of }123),\\&lt;br /&gt;
                       -1, \;\mbox{if }\; ijk = 213,132,\;\mbox{or}\;321\;(\mbox{These are odd permutations of }123),\\&lt;br /&gt;
                    \;\;\;  0, \; \mbox{otherwise}, \; (\mbox{meaning any index is repeated}),&lt;br /&gt;
                 \end{cases}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.9}}&lt;br /&gt;
so that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\det(A) &amp;amp;=&amp;amp; \epsilon_{123}a_{11}a_{22}a_{33} &lt;br /&gt;
         +\epsilon_{132}a_{11}a_{23}a_{32}&lt;br /&gt;
         +\epsilon_{231}a_{12}a_{23}a_{31}  \\&lt;br /&gt;
       &amp;amp;&amp;amp;+\epsilon_{213}a_{12}a_{21}a_{33}&lt;br /&gt;
         +\epsilon_{312}a_{13}a_{21}a_{32}&lt;br /&gt;
         +\epsilon_{321}a_{13}a_{22}a_{31}.&lt;br /&gt;
\end{align}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.10}}&lt;br /&gt;
Now given the values of &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; in [[#eqC.9|Eq. C.9]],&lt;br /&gt;
this is&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(A) = a_{11}a_{22}a_{33} - a_{11}a_{23}a_{32} + a_{12}a_{23}a_{31} &lt;br /&gt;
         - a_{12}a_{21}a_{33} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
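The epsilon-symbol sum of Eq. (C.7) can be implemented directly by summing over permutations with their signs; the sketch below (added as an illustration, not part of the original text) checks it against NumPy's built-in determinant:

```python
import numpy as np
from itertools import permutations

def det_levi_civita(A):
    """Determinant via the epsilon-symbol sum of Eq. (C.7):
    sum over all permutations (i, j, k, ...) of the column indices."""
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # Sign of the permutation: +1 for even, -1 for odd
        # (counted here by the number of inversions).
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if perm[a] > perm[b]:
                    sign = -sign
        term = sign
        for row, col in enumerate(perm):
            term *= A[row, col]
        total += term
    return total

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.isclose(det_levi_civita(A), np.linalg.det(A))
```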
&lt;br /&gt;
The determinant has several properties that are useful to know.  A few are listed here:  &lt;br /&gt;
#The determinant of the transpose of a matrix is the same as the determinant of the matrix itself: &amp;lt;center&amp;gt;&amp;lt;math&amp;gt; \det(A) = \det(A^T).\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
#The determinant of a product is the product of determinants:    &amp;lt;center&amp;gt;&amp;lt;math&amp;gt; \det(AB) = \det(A)\det(B).\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
From this last property, another specific property can be derived.&lt;br /&gt;
If we take the determinant of the product of a matrix and its&lt;br /&gt;
inverse, we find&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(U U^{-1}) = \det(U)\det(U^{-1}) = \det(\mathbb{I}) = 1,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
since the determinant of the identity is one.  This implies that&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det(U^{-1}) = \frac{1}{\det(U)}.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
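The listed determinant properties, including the inverse rule just derived, can be verified numerically (a NumPy sketch with random matrices, added as an illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# det(A^T) = det(A)
assert np.isclose(np.linalg.det(A.T), np.linalg.det(A))

# det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))

# det(A^-1) = 1 / det(A), provided det(A) != 0
assert np.isclose(np.linalg.det(np.linalg.inv(A)),
                  1.0 / np.linalg.det(A))
```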
&lt;br /&gt;
====The Inverse of a Matrix====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The inverse &amp;lt;!-- \index{inverse}--&amp;gt; of a square matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; is another matrix,&lt;br /&gt;
denoted &amp;lt;math&amp;gt;A^{-1}\,\!&amp;lt;/math&amp;gt;, such that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
AA^{-1} = A^{-1}A = \mathbb{I},&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\mathbb{I}\,\!&amp;lt;/math&amp;gt; is the identity matrix consisting of zeroes everywhere&lt;br /&gt;
except the diagonal, which has ones.  For example, the &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I}_3 = \left(\begin{array}{ccc} 1 &amp;amp; 0 &amp;amp; 0 \\ 0 &amp;amp; 1 &amp;amp; 0 \\ 0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is important to note that ''a matrix is invertible if and only if its determinant is nonzero.''  Thus one only needs to calculate the&lt;br /&gt;
determinant to see if a matrix has an inverse or not.&lt;br /&gt;
&lt;br /&gt;
====Hermitian Matrices====&lt;br /&gt;
&lt;br /&gt;
Hermitian matrices are important for a variety of reasons, primarily because their eigenvalues are real.  Thus Hermitian matrices are used to represent density operators and density matrices, as well as Hamiltonians.  The density operator is a positive semi-definite Hermitian matrix (it has no negative eigenvalues) whose trace is equal to one.  In any case, it is often desirable to represent &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrices using a real linear combination of a complete set of &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrices.  A set of &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrices is complete if any Hermitian matrix can be represented in terms of the set.  Let &amp;lt;math&amp;gt;\{\lambda_i\}\,\!&amp;lt;/math&amp;gt; be a complete set.  Then any Hermitian matrix can be represented by &amp;lt;math&amp;gt;\sum_i a_i \lambda_i\,\!&amp;lt;/math&amp;gt; with real coefficients &amp;lt;math&amp;gt;a_i\,\!&amp;lt;/math&amp;gt;.  The set can always be taken to be a set of traceless Hermitian matrices together with the identity matrix.  This is convenient for the density matrix (its trace is one) because the identity part of an &amp;lt;math&amp;gt;N\times N\,\!&amp;lt;/math&amp;gt; Hermitian matrix is &amp;lt;math&amp;gt;(1/N)\mathbb{I}\,\!&amp;lt;/math&amp;gt; if we take all others in the set to be traceless.  For the Hamiltonian, the set consists of a traceless part and an identity part, where the identity part just gives an overall phase that can often be neglected.  &lt;br /&gt;
&lt;br /&gt;
One example of such a set which is extremely useful is the set of Pauli matrices.  These are discussed in detail in [[Chapter 2 - Qubits and Collections of Qubits|Chapter 2]] and in particular in [[Chapter 2 - Qubits and Collections of Qubits#The Pauli Matrices|Section 2.4]].&lt;br /&gt;
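As a concrete sketch of this expansion (added as an illustration; the coefficient formula &amp;lt;math&amp;gt;a_i = \mbox{Tr}(\lambda_i H)/2\,\!&amp;lt;/math&amp;gt; is the standard one for the Pauli set, not stated explicitly in the text), a 2x2 Hermitian matrix can be written as a real combination of the identity and the three Pauli matrices:

```python
import numpy as np

# The identity plus the three Pauli matrices form a complete set
# for 2x2 Hermitian matrices.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [I2, sx, sy, sz]

H = np.array([[2.0, 1 - 1j],
              [1 + 1j, -1.0]])          # a Hermitian matrix

# Expansion coefficients a_i = Tr(lambda_i H) / 2; they come out
# real precisely because H is Hermitian.
coeffs = [np.trace(lam @ H) / 2 for lam in basis]
assert all(abs(c.imag) < 1e-12 for c in coeffs)

# Reconstruct H as a real linear combination of the set.
H_rebuilt = sum(c.real * lam for c, lam in zip(coeffs, basis))
assert np.allclose(H_rebuilt, H)
```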
&lt;br /&gt;
====Unitary Matrices====&lt;br /&gt;
&lt;br /&gt;
A ''unitary matrix'' &amp;lt;!-- \index{unitary matrix} --&amp;gt; &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; is one whose&lt;br /&gt;
inverse is also its Hermitian conjugate, &amp;lt;math&amp;gt;U^\dagger = U^{-1}\,\!&amp;lt;/math&amp;gt;, so that &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
U^\dagger U = UU^\dagger = \mathbb{I}.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If the unitary matrix also has determinant one, it is said to be ''a special unitary matrix''.&amp;lt;!-- \index{special unitary matrix}--&amp;gt;  The set of&lt;br /&gt;
&amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; unitary matrices is denoted&lt;br /&gt;
&amp;lt;math&amp;gt;U(n)\,\!&amp;lt;/math&amp;gt; and the set of special unitary matrices is denoted &amp;lt;math&amp;gt;SU(n)\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
Unitary matrices are particularly important in quantum mechanics&lt;br /&gt;
because they describe the evolution of quantum states.&lt;br /&gt;
They can do so because the rows and columns of a unitary matrix (viewed as vectors) are orthonormal.  (This is made clear in an example below.)  This means that when&lt;br /&gt;
they act on a basis vector of the form&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; \left\vert j\right\rangle = &lt;br /&gt;
 \left(\begin{array}{c} 0 \\ 0 \\ \vdots \\ 1 \\ \vdots \\ 0 &lt;br /&gt;
  \end{array}\right), &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.11}}&lt;br /&gt;
with a single 1 in, say, the &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;th spot and zeroes everywhere else, the result is a normalized complex vector.  A unitary matrix acting on a set of&lt;br /&gt;
orthonormal vectors of the form given in Eq.[[#eqC.11|(C.11)]]&lt;br /&gt;
will produce another orthonormal set.  &lt;br /&gt;
&lt;br /&gt;
Let us consider the example of a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; unitary matrix, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U = \left(\begin{array}{cc} &lt;br /&gt;
              a &amp;amp; b \\ &lt;br /&gt;
              c &amp;amp; d &lt;br /&gt;
           \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.12}}&lt;br /&gt;
The inverse of this matrix is the Hermitian conjugate, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U ^{-1} = U^\dagger = \left(\begin{array}{cc} &lt;br /&gt;
                         a^* &amp;amp; c^* \\ &lt;br /&gt;
                         b^* &amp;amp; d^* &lt;br /&gt;
                       \end{array}\right),&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.13}}&lt;br /&gt;
provided that the matrix &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; satisfies the constraints&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
|a|^2 + |b|^2 = 1, \; &amp;amp; \; ac^*+bd^* =0  \\&lt;br /&gt;
ca^*+db^*=0,  \;      &amp;amp;  \; |c|^2 + |d|^2 =1,&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|C.14}}&lt;br /&gt;
and&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
|a|^2 + |c|^2 = 1, \; &amp;amp; \; ba^*+dc^* =0  \\&lt;br /&gt;
b^*a+d^*c=0,  \;      &amp;amp;  \; |b|^2 + |d|^2 =1.&lt;br /&gt;
\end{align}\,\!&amp;lt;/math&amp;gt;|C.15}}&lt;br /&gt;
Looking at each row as a vector, the constraints in&lt;br /&gt;
Eq.[[#eqC.14|(C.14)]] are the orthonormality conditions for the&lt;br /&gt;
vectors forming the rows.  Similarly, the constraints in&lt;br /&gt;
Eq.[[#eqC.15|(C.15)]] are the orthonormality conditions for the&lt;br /&gt;
vectors forming the columns.&lt;br /&gt;
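These orthonormality conditions are straightforward to verify for a concrete unitary (a NumPy sketch using a simple rotation matrix, added as an illustration):

```python
import numpy as np

# A concrete 2x2 unitary: a real rotation by an angle theta.
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

# U^dagger U = U U^dagger = identity.
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.allclose(U @ U.conj().T, np.eye(2))

# The rows are orthonormal (the conditions of Eq. C.14) ...
for i in range(2):
    for j in range(2):
        ip = np.vdot(U[i, :], U[j, :])
        assert np.isclose(ip, 1.0 if i == j else 0.0)

# ... and so are the columns (the conditions of Eq. C.15).
for i in range(2):
    for j in range(2):
        ip = np.vdot(U[:, i], U[:, j])
        assert np.isclose(ip, 1.0 if i == j else 0.0)
```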
&lt;br /&gt;
===More Dirac Notation===&lt;br /&gt;
&lt;br /&gt;
Now that we have a definition for the Hermitian conjugate, we consider the&lt;br /&gt;
case of an &amp;lt;math&amp;gt;n\times 1\,\!&amp;lt;/math&amp;gt; matrix, i.e. a column vector.  For &amp;lt;math&amp;gt;n=2\,\!&amp;lt;/math&amp;gt;, in Dirac notation this is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi\right\rangle = \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The Hermitian conjugate comes up so often that we use the following&lt;br /&gt;
notation for vectors:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle \psi\right\vert = (\left\vert\psi\right\rangle)^\dagger = \left(\begin{array}{c} \alpha \\&lt;br /&gt;
    \beta \end{array}\right)^\dagger &lt;br /&gt;
 = \left( \alpha^*, \; \beta^* \right).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This is a row vector and in Dirac notation is denoted by the symbol &amp;lt;math&amp;gt;\left\langle\cdot \right\vert\!&amp;lt;/math&amp;gt;, which is called a ''bra''.  Let us consider a second complex vector, &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\phi\right\rangle = \left(\begin{array}{c} \gamma \\ \delta&lt;br /&gt;
  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The ''inner product'' &amp;lt;!-- \index{inner product}--&amp;gt; between &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; and &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert\phi\right\rangle\,\!&amp;lt;/math&amp;gt; is computed as follows:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \left\langle\phi\mid\psi\right\rangle &amp;amp; \equiv (\left\vert\phi\right\rangle)^\dagger\left\vert\psi \right\rangle   \\&lt;br /&gt;
                  &amp;amp;= (\gamma^*,\delta^*) \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right)   \\&lt;br /&gt;
                  &amp;amp;= \gamma^*\alpha + \delta^*\beta.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.16}}&lt;br /&gt;
If these two vectors are ''orthogonal'', &amp;lt;!-- \index{orthogonal!vectors} --&amp;gt;&lt;br /&gt;
then their inner product is zero, or &amp;lt;math&amp;gt;\left\langle\phi\mid\psi\right\rangle =0\,\!&amp;lt;/math&amp;gt;.  (The quantity &amp;lt;math&amp;gt; \left\langle\phi\mid\psi\right\rangle \,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
is called a ''bracket'', which is the product of the ''bra'' and the ''ket''.)  The inner product of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; with itself is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\psi\mid\psi\right\rangle = |\alpha|^2 + |\beta|^2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This vector is considered normalized when &amp;lt;math&amp;gt;\left\langle\psi\mid\psi\right\rangle = 1\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
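The bracket of Eq. (C.16) is exactly what NumPy's `vdot` computes, since `vdot` conjugates its first argument (a sketch added as an illustration; the particular amplitudes are arbitrary):

```python
import numpy as np

psi = np.array([3/5, 4j/5])                 # |psi> = (alpha, beta)
phi = np.array([1/np.sqrt(2), 1/np.sqrt(2)])

# <phi|psi> = (|phi>)^dagger |psi>; np.vdot conjugates its first
# argument, so it computes the bracket of Eq. (C.16) directly.
bracket = np.vdot(phi, psi)
assert np.isclose(bracket, (3/5 + 4j/5) / np.sqrt(2))

# <psi|psi> = |alpha|^2 + |beta|^2 = 1 for a normalized state.
assert np.isclose(np.vdot(psi, psi).real, 1.0)
```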
&lt;br /&gt;
More generally, we will consider vectors in &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; dimensions.  In this&lt;br /&gt;
case we write the vector in terms of a set of basis vectors,&lt;br /&gt;
&amp;lt;math&amp;gt;\{\left\vert i\right\rangle\}\,\!&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;i = 0,1,2,...N-1\,\!&amp;lt;/math&amp;gt;.  This is an ordered set of&lt;br /&gt;
vectors which are labeled simply by integers.  If the set is orthogonal,&lt;br /&gt;
then &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i\mid j\right\rangle = 0, \;\; \mbox{for all }i\neq j.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If they are normalized, then &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i \mid i \right\rangle = 1, \;\;\mbox{for all } i.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
If both of these are true, i.e. the entire set is orthonormal, we can&lt;br /&gt;
write&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i\mid j\right\rangle = \delta_{ij}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where the symbol &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; is called the Kronecker delta &amp;lt;!-- \index{Kronecker delta} --&amp;gt; and is defined by &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\delta_{ij} = \begin{cases}&lt;br /&gt;
               1, &amp;amp; \mbox{if } i=j, \\&lt;br /&gt;
               0, &amp;amp; \mbox{if } i\neq j.&lt;br /&gt;
              \end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.17}}&lt;br /&gt;
Now let two such &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt;-dimensional vectors&lt;br /&gt;
be expressed in the same basis as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \Psi\right\rangle = \sum_{i=0}^{N-1} \alpha_i\left\vert i\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
and&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\Phi\right\rangle = \sum_{j=0}^{N-1} \beta_j\left\vert j\right\rangle.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Then the inner product &amp;lt;!--\index{inner product}--&amp;gt; is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\langle\Psi\mid\Phi\right\rangle &amp;amp;= \left(\sum_{i=0}^{N-1}&lt;br /&gt;
             \alpha_i\left\vert i\right\rangle\right)^\dagger\left(\sum_{j=0}^{N-1} \beta_j\left\vert j\right\rangle\right)  \\&lt;br /&gt;
                 &amp;amp;= \sum_{ij} \alpha_i^*\beta_j\left\langle i\mid j\right\rangle  \\&lt;br /&gt;
                 &amp;amp;= \sum_{ij} \alpha_i^*\beta_j\delta_{ij}  \\&lt;br /&gt;
                 &amp;amp;= \sum_i\alpha^*_i\beta_i,&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.18}}&lt;br /&gt;
where the fact that the Kronecker delta is zero unless&lt;br /&gt;
&amp;lt;math&amp;gt;i=j\,\!&amp;lt;/math&amp;gt; is used to obtain the last equality.  Taking the inner product of a vector&lt;br /&gt;
with itself gives&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\Psi\mid\Psi\right\rangle = \sum_i\alpha^*_i\alpha_i = \sum_i|\alpha_i|^2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This immediately gives us a very important property of the inner&lt;br /&gt;
product.  It tells us that, in general,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle\Phi\mid\Phi\right\rangle \geq 0, \;\; \mbox{and} \;\; \left\langle\Phi\mid \Phi\right\rangle = 0&lt;br /&gt;
\Leftrightarrow \left\vert\Phi\right\rangle = 0. &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
(The symbol &amp;lt;math&amp;gt;\Leftrightarrow\,\!&amp;lt;/math&amp;gt; means &amp;lt;nowiki&amp;gt;&amp;quot;if and only if,&amp;quot;&amp;lt;/nowiki&amp;gt; sometimes written as &amp;lt;nowiki&amp;gt;&amp;quot;iff.&amp;quot;&amp;lt;/nowiki&amp;gt;)  &lt;br /&gt;
&lt;br /&gt;
We could also expand a vector in a different basis.  Let us suppose&lt;br /&gt;
that the set &amp;lt;math&amp;gt;\{\left\vert e_k \right\rangle\}\,\!&amp;lt;/math&amp;gt; is an orthonormal basis &amp;lt;math&amp;gt;(\left\langle e_k \mid e_l\right\rangle =&lt;br /&gt;
\delta_{kl})\,\!&amp;lt;/math&amp;gt; that is different from the one considered earlier.  We&lt;br /&gt;
could expand our vector &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt; in terms of our new basis by&lt;br /&gt;
expanding our new basis in terms of our old basis.  Let us first&lt;br /&gt;
expand the &amp;lt;math&amp;gt;\left\vert e_k\right\rangle\,\!&amp;lt;/math&amp;gt; in terms of the &amp;lt;math&amp;gt;\left\vert j\right\rangle\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert e_k\right\rangle= \sum_j \left\vert j\right\rangle \left\langle j\mid e_k\right\rangle,&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.19}}&lt;br /&gt;
so that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert \Psi\right\rangle &amp;amp;= \sum_j \alpha_j\left\vert j\right\rangle  \\&lt;br /&gt;
           &amp;amp;= \sum_{j}\sum_k\alpha_j\left\vert e_k \right\rangle \left\langle e_k \mid j\right\rangle  \\ &lt;br /&gt;
           &amp;amp;= \sum_k \alpha_k^\prime \left\vert e_k\right\rangle, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.20}}&lt;br /&gt;
where &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\alpha_k^\prime = \sum_j \alpha_j \left\langle e_k \mid j\right\rangle. &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.21}}&lt;br /&gt;
Notice that the insertion of &amp;lt;math&amp;gt;\sum_k\left\vert e_k\right\rangle\left\langle e_k\right\vert\,\!&amp;lt;/math&amp;gt; didn't do anything to our original vector; it is the same vector, just in a&lt;br /&gt;
different basis.  Therefore, this is effectively the identity operator,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbb{I} = \sum_k\left\vert e_k \right\rangle\left\langle e_k\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This is an important and quite useful relation.  &lt;br /&gt;
To interpret Eq.[[#eqC.19|(C.19)]], we can draw a close&lt;br /&gt;
analogy with three-dimensional real vectors.  The inner product&lt;br /&gt;
&amp;lt;math&amp;gt;\left\langle j \mid e_k \right\rangle\,\!&amp;lt;/math&amp;gt; can be interpreted as the projection of one vector onto&lt;br /&gt;
another.  It provides the part of &amp;lt;math&amp;gt;\left\vert e_k \right\rangle\,\!&amp;lt;/math&amp;gt; along &amp;lt;math&amp;gt;\left\vert j \right\rangle\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
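The completeness relation and the change-of-basis expansion above can be checked numerically (a NumPy sketch using an arbitrary rotated basis of C^2, added as an illustration):

```python
import numpy as np

# An orthonormal basis {|e_k>} for C^2 different from the
# computational basis: the computational basis rotated by 0.3 rad.
e0 = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)
e1 = np.array([-np.sin(0.3), np.cos(0.3)], dtype=complex)

# Completeness: sum_k |e_k><e_k| = identity.
P = np.outer(e0, e0.conj()) + np.outer(e1, e1.conj())
assert np.allclose(P, np.eye(2))

# Re-expanding a vector in the new basis leaves it unchanged:
# |Psi> = sum_k <e_k|Psi> |e_k>.
Psi = np.array([0.6, 0.8j])
Psi_rebuilt = np.vdot(e0, Psi) * e0 + np.vdot(e1, Psi) * e1
assert np.allclose(Psi_rebuilt, Psi)
```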
&lt;br /&gt;
===Transformations===&lt;br /&gt;
&lt;br /&gt;
Suppose we have two different orthonormal bases, &amp;lt;math&amp;gt;\{\left\vert e_k\right\rangle\}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\{\left\vert j\right\rangle\}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
The numbers &amp;lt;math&amp;gt;\left\langle e_k\mid j\right\rangle\,\!&amp;lt;/math&amp;gt; for all the different &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; are&lt;br /&gt;
often referred to as matrix elements since the set forms a matrix, with&lt;br /&gt;
&amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; labelling the rows and &amp;lt;math&amp;gt;j\,\!&amp;lt;/math&amp;gt; labelling the columns.  Thus we&lt;br /&gt;
can represent the transformation from one basis to another with a matrix&lt;br /&gt;
transformation.  Let &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; be the matrix with elements &amp;lt;math&amp;gt;m_{kj} =&lt;br /&gt;
\left\langle e_k\mid j\right\rangle\,\!&amp;lt;/math&amp;gt;.  The transformation from one basis to another,&lt;br /&gt;
written in terms of the coefficients of &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, is &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt; A^\prime = MA, &amp;lt;/math&amp;gt;|C.22}}&lt;br /&gt;
where &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
A^\prime = \left(\begin{array}{c} \alpha_1^\prime \\ \alpha_2^\prime \\ \vdots \\&lt;br /&gt;
    \alpha_n^\prime \end{array}\right), \;\; &lt;br /&gt;
\mbox{ and } \;\;&lt;br /&gt;
A = \left(\begin{array}{c} \alpha_1 \\ \alpha_2 \\ \vdots \\&lt;br /&gt;
    \alpha_n\end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This sort of transformation is a change of basis.  Often, when one vector is transformed into another, the transformation can be viewed as acting on the components of the vector and is likewise represented by a matrix.  Thus transformations can either be&lt;br /&gt;
represented by the matrix equation, like Eq.[[#eqC.22|(C.22)]], or the components, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\alpha_k^\prime = \sum_j \alpha_j \left\langle e_k \mid j \right\rangle = \sum_j m_{kj}\alpha_j. &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.23}}&lt;br /&gt;
In the case that we consider a matrix transformation of basis elements, we call it a passive transformation.  (The transformation does nothing to the object, but only changes the basis in which the object is described.)  An active transformation is one where the object itself is transformed.  Often these two transformations, active and passive, are very simply related.  However, the distinction can be very important.  &lt;br /&gt;
&lt;br /&gt;
For a general transformation matrix &amp;lt;math&amp;gt;T\,\!&amp;lt;/math&amp;gt; acting on a vector,&lt;br /&gt;
the matrix elements in a particular basis &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
t_{ij} = \left\langle i\right\vert T \left\vert j\right\rangle, &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
just as elements of a vector can be found using&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\langle i\mid \Psi \right\rangle = \alpha_i.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Transformations of a Qubit====&lt;br /&gt;
&lt;br /&gt;
It is worth dwelling on this point and presenting several ways to parametrize the set of transformations of a qubit.  A qubit state is represented by a complex two-dimensional vector normalized to one:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert \psi\right\rangle = \alpha_0 \left\vert 0 \right\rangle + \alpha_1 \left\vert 1\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1 \end{array}\right), \;\;\;\; |\alpha_0|^2 + |\alpha_1|^2 = 1.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The most general matrix transformation that will take this to any other state of the same form (complex, 2-d vector with unit norm) is a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; unitary matrix.  In [[Chapter 2 - Qubits and Collections of Qubits|Chapter 2]], several specific examples of qubit transformations were given; in [[Chapter 3 - Physics of Quantum Information|Chapter 3]], [[Chapter 3 - Physics of Quantum Information#Measurements Revisited|Section 3.4]] it was stated that an element of SU(2) can be written as (see [[Chapter 3 - Physics of Quantum Information#Exponentian of a Matrix|Section 3.2.1, Exponentiation of a Matrix]])&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
U(\theta) &amp;amp;= \exp(-i\vec{n}\cdot\vec{\sigma} \theta/2) \\&lt;br /&gt;
          &amp;amp;= (\mathbb{I}\cos(\theta/2) -i\vec{n}\cdot\vec{\sigma} \sin(\theta/2))&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.24}}&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{n}\,\!&amp;lt;/math&amp;gt; is a unit vector, &amp;lt;math&amp;gt;|\vec{n}|=1\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\vec{n}\cdot\vec{\sigma} =&lt;br /&gt;
n_1\sigma_1+n_2\sigma_2+n_3\sigma_3\,\!&amp;lt;/math&amp;gt;. &lt;br /&gt;
Explicitly, this is &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
 \exp(-i\vec{n}\cdot\vec{\sigma} \theta/2) &amp;amp;= \left(\begin{array}{cc}&lt;br /&gt;
                                  1 &amp;amp; 0 \\ &lt;br /&gt;
                                  0 &amp;amp; 1 \end{array}\right)\cos(\theta/2) \\&lt;br /&gt;
                        &amp;amp; \;\;\;   + (-i)\left[ n_1\left(\begin{array}{cc}&lt;br /&gt;
                                  0 &amp;amp; 1 \\ &lt;br /&gt;
                                  1 &amp;amp; 0 \end{array}\right)&lt;br /&gt;
                              + n_2\left(\begin{array}{cc}&lt;br /&gt;
                                  0 &amp;amp; -i \\ &lt;br /&gt;
                                  i &amp;amp; 0 \end{array}\right)&lt;br /&gt;
                              + n_3\left(\begin{array}{cc}&lt;br /&gt;
                                  1 &amp;amp; 0 \\ &lt;br /&gt;
                                  0 &amp;amp; -1 \end{array}\right)\right]\sin(\theta/2) \\&lt;br /&gt;
                                &amp;amp;= &lt;br /&gt;
         \left(\begin{array}{cc}&lt;br /&gt;
  \cos(\theta/2) -in_3\sin(\theta/2) &amp;amp; (-in_1-n_2)\sin(\theta/2) \\ &lt;br /&gt;
   (-in_1+n_2)\sin(\theta/2) &amp;amp; \cos(\theta/2) +in_3\sin(\theta/2)  \end{array}\right).&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Notice that this is a ''special unitary matrix.''  (See Section [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Unitary Matrices]].)&lt;br /&gt;
To see that this is the most general SU(2) matrix, one needs to verify that any complex &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; unitary matrix can be written in this form.  (One way to do this is to start with a generic matrix and impose the restrictions.  Here one may simply convince oneself that this is general through observation by acting on basis vectors.)  This is the most general qubit transformation and can be interpreted as a rotation about the axis &amp;lt;math&amp;gt;\hat{n}\,\!&amp;lt;/math&amp;gt; by an angle &amp;lt;math&amp;gt;\theta\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
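Eq. (C.24) is easy to build and test numerically (a NumPy sketch, added as an illustration; for the special case &amp;lt;math&amp;gt;\vec{n}=\hat{z}\,\!&amp;lt;/math&amp;gt; the matrix reduces to a diagonal phase rotation, which serves as a check):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def su2(n, theta):
    """U(theta) = cos(theta/2) I - i (n . sigma) sin(theta/2), Eq. (C.24)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)              # enforce |n| = 1
    n_dot_sigma = n[0] * sx + n[1] * sy + n[2] * sz
    return (np.cos(theta / 2) * np.eye(2)
            - 1j * np.sin(theta / 2) * n_dot_sigma)

U = su2([0.0, 0.0, 1.0], np.pi / 3)

# U is unitary with determinant one, i.e. special unitary.
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.isclose(np.linalg.det(U), 1.0)

# For n = z-hat the rotation is diagonal: diag(e^{-i theta/2}, e^{i theta/2}).
expected = np.diag([np.exp(-1j * np.pi / 6), np.exp(1j * np.pi / 6)])
assert np.allclose(U, expected)
```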
&lt;br /&gt;
Another parametrization of this set of matrices is the following, called the Euler angle parametrization:&lt;br /&gt;
 {{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
U_{EA}   = \exp(-i\sigma_z \alpha/2) \exp(-i\sigma_y \beta/2) \exp(-i\sigma_z \gamma/2).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.25}}&lt;br /&gt;
In this case the choice of the matrices &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; is not unique; any two distinct Pauli matrices may be chosen.  This parametrization is quite simple, useful, and generalizes to SU(N) for arbitrary N.  In the simple case of a qubit, one may convince oneself of its generality by acting on basis vectors as before.  Alternatively, with a little thought, one may see that rotating to a position on the sphere by the first angle, followed by rotations through the other two, allows a general orientation of an object.&lt;br /&gt;
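The Euler angle product can be written out explicitly, since exp(-i sigma_z a/2) is diagonal and exp(-i sigma_y b/2) is a real rotation matrix.  A brief numerical sketch (the angle values are arbitrary) confirming that the product is special unitary:&lt;br /&gt;

```python
import numpy as np

def Rz(a):
    # exp(-i sigma_z a/2) is diagonal
    return np.diag([np.exp(-1j * a / 2), np.exp(1j * a / 2)])

def Ry(b):
    # exp(-i sigma_y b/2) is a real rotation matrix
    return np.array([[np.cos(b / 2), -np.sin(b / 2)],
                     [np.sin(b / 2),  np.cos(b / 2)]], dtype=complex)

alpha, beta, gamma = 0.4, 1.1, -0.3
U_EA = Rz(alpha) @ Ry(beta) @ Rz(gamma)

assert np.allclose(U_EA @ U_EA.conj().T, np.eye(2))  # unitary
assert np.isclose(np.linalg.det(U_EA), 1.0)          # special
```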
&lt;br /&gt;
====Similarity Transformation====&lt;br /&gt;
&lt;br /&gt;
A ''similarity transformation'' &amp;lt;!--\index{similarity transformation}--&amp;gt; &lt;br /&gt;
of an &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; by an invertible matrix &amp;lt;math&amp;gt;S\,\!&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;S A S^{-1}\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
There are (at least) three important things to note about similarity&lt;br /&gt;
transformations: &lt;br /&gt;
#Similarity transformations leave the trace of a matrix unchanged.  This is shown explicitly in [[#The Trace|Section 3.5]].&lt;br /&gt;
#Similarity transformations leave the determinant of a matrix unchanged, or invariant.  This is because &amp;lt;center&amp;gt;&amp;lt;math&amp;gt; \det(SAS^{-1}) = \det(S)\det(A)\det(S^{-1}) =\det(S)\det(A)\frac{1}{\det(S)} = \det(A). \,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
#Simultaneous similarity transformations of matrices in an equation will leave the equation unchanged.  Let &amp;lt;math&amp;gt;A^\prime = SAS^{-1}\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B^\prime = SBS^{-1}\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;C^\prime = SCS^{-1}\,\!&amp;lt;/math&amp;gt;.  If &amp;lt;math&amp;gt;AB=C\,\!&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;A^\prime B^\prime = C^\prime\,\!&amp;lt;/math&amp;gt;, since &amp;lt;math&amp;gt;A^\prime B^\prime = SAS^{-1}SBS^{-1} = SABS^{-1} =  SCS^{-1}=C^\prime\,\!&amp;lt;/math&amp;gt;.  The two matrices &amp;lt;math&amp;gt;A^\prime\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; are said to be ''similar''.&lt;br /&gt;
&amp;lt;!-- \index{similar matrices} --&amp;gt;&lt;br /&gt;
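All three properties can be spot-checked numerically.  The following sketch (in NumPy, with randomly chosen matrices; a random square matrix is invertible with probability one) verifies the invariance of the trace and determinant and the preservation of the equation AB = C:&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
S = rng.standard_normal((3, 3))          # generic S, invertible
Sinv = np.linalg.inv(S)
Ap = S @ A @ Sinv                        # A' = S A S^{-1}

assert np.isclose(np.trace(Ap), np.trace(A))            # 1. trace invariant
assert np.isclose(np.linalg.det(Ap), np.linalg.det(A))  # 2. determinant invariant

# 3. Equations are preserved: if AB = C then A'B' = C'
B = rng.standard_normal((3, 3))
C = A @ B
Bp = S @ B @ Sinv
Cp = S @ C @ Sinv
assert np.allclose(Ap @ Bp, Cp)
```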
&lt;br /&gt;
===Eigenvalues and Eigenvectors===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- \index{eigenvalues}\index{eigenvectors} --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A matrix can always be brought to diagonal form, though not, in general, by a similarity transformation.  By this, it is meant that for&lt;br /&gt;
every complex matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; there is a diagonal matrix &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; such that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
M = UDV,  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.26}}&lt;br /&gt;
where &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; are unitary matrices.  This form is called a singular value decomposition of the matrix and the entries of the diagonal matrix &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; are called the ''singular values'' &amp;lt;!--\index{singular values}--&amp;gt; &lt;br /&gt;
of the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt;.  However, the singular values are not always easy to find.  &lt;br /&gt;
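NumPy can compute a singular value decomposition directly; the routine np.linalg.svd returns the factorization in exactly the form of Eq. (C.26).  A brief sketch with a randomly chosen complex matrix:&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Returns M = U @ diag(s) @ Vh, with U and Vh unitary and s >= 0
U, s, Vh = np.linalg.svd(M)

assert np.allclose(U @ np.diag(s) @ Vh, M)            # M = U D V
assert np.allclose(U @ U.conj().T, np.eye(3))         # U unitary
assert np.allclose(Vh @ Vh.conj().T, np.eye(3))       # V unitary
assert np.all(s >= 0)                                 # singular values are non-negative
```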
&lt;br /&gt;
For the special case that the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; is Hermitian &amp;lt;math&amp;gt;(M^\dagger = M)\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
the matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt; can be written as&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
M = U D U^\dagger&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.27}}&lt;br /&gt;
where &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; is unitary &amp;lt;math&amp;gt;(U^{-1}=U^\dagger)\,\!&amp;lt;/math&amp;gt;.  In this case the elements&lt;br /&gt;
of the matrix &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; are called ''eigenvalues''. &amp;lt;!--\index{eigenvalues}--&amp;gt;&lt;br /&gt;
Very often eigenvalues are introduced as solutions to the equation&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
M \left\vert v\right\rangle = \lambda \left\vert v\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;\left\vert v\right\rangle\,\!&amp;lt;/math&amp;gt; is an ''eigenvector''. &amp;lt;!--\index{eigenvector} --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To find the eigenvalues and eigenvectors of a matrix &amp;lt;math&amp;gt;M\,\!&amp;lt;/math&amp;gt;, we follow a&lt;br /&gt;
standard procedure which is to calculate&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(\lambda\mathbb{I} - M) = 0&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.28}}&lt;br /&gt;
and then solve for &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  The different solutions for &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt; form the&lt;br /&gt;
set of eigenvalues, which is called the ''spectrum''. &amp;lt;!-- \index{spectrum}--&amp;gt; Let the different eigenvalues be denoted by &amp;lt;math&amp;gt;\lambda_i\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;i=1,2,...,n\,\!&amp;lt;/math&amp;gt; for an &amp;lt;math&amp;gt;n\times n\,\!&amp;lt;/math&amp;gt; matrix.  If two&lt;br /&gt;
eigenvalues are equal, we say the spectrum is &lt;br /&gt;
''degenerate''. &amp;lt;!--\index{degenerate}--&amp;gt; To find the&lt;br /&gt;
eigenvectors corresponding to the different eigenvalues, the equation &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
M \left\vert v\right\rangle = \lambda_i \left\vert v\right\rangle&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
must be solved for each value of &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt;.  Notice that this equation&lt;br /&gt;
holds even if we multiply both sides by some complex number.  This&lt;br /&gt;
implies that an eigenvector can always be scaled.  Usually they are&lt;br /&gt;
normalized to obtain an orthonormal set.  As we will see by example,&lt;br /&gt;
degenerate eigenvalues require some care.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Example 1====&lt;br /&gt;
&lt;br /&gt;
Consider a &amp;lt;math&amp;gt;2\times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrix&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\sigma = \left(\begin{array}{cc} &lt;br /&gt;
               1+a &amp;amp; b-ic \\&lt;br /&gt;
              b+ic &amp;amp; 1-a  \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.29}}&lt;br /&gt;
To find the eigenvalues &amp;lt;!--\index{eigenvalues}--&amp;gt; &lt;br /&gt;
of this, we follow a standard procedure, which&lt;br /&gt;
is to calculate &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\det(\sigma-\lambda\mathbb{I}) = 0,&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.30}}&lt;br /&gt;
and solve for &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  The eigenvalues of this matrix are given by&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det\left(\begin{array}{cc} &lt;br /&gt;
               1+a-\lambda &amp;amp; b-ic \\&lt;br /&gt;
              b+ic &amp;amp; 1-a-\lambda  \end{array}\right) =0,  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
which implies that the eigenvalues are&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda_{\pm} = 1\pm \sqrt{a^2+b^2+c^2}&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
and the (unnormalized) eigenvectors &amp;lt;!--\index{eigenvectors}--&amp;gt; are&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_1=\left(\begin{array}{c}&lt;br /&gt;
        a + \sqrt{a^2 + b^2 + c^2} \\ &lt;br /&gt;
        b+ic &lt;br /&gt;
        \end{array}\right), &lt;br /&gt;
v_2= \left(\begin{array}{c}&lt;br /&gt;
         a - \sqrt{a^2 + b^2 + c^2} \\ &lt;br /&gt;
         b+ic \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
These expressions are useful for calculating properties of qubit&lt;br /&gt;
states as will be seen in the text.&lt;br /&gt;
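The eigenvalue formula above is easy to spot-check numerically; a sketch with arbitrarily chosen values of a, b, and c:&lt;br /&gt;

```python
import numpy as np

a, b, c = 0.3, 0.4, 0.5
sigma = np.array([[1 + a, b - 1j * c],
                  [b + 1j * c, 1 - a]])

# Eigenvalues of a Hermitian matrix, returned in ascending order
evals = np.linalg.eigvalsh(sigma)

r = np.sqrt(a**2 + b**2 + c**2)
assert np.allclose(evals, [1 - r, 1 + r])   # lambda_minus, lambda_plus
```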
&lt;br /&gt;
====Example 2====&lt;br /&gt;
&lt;br /&gt;
Now consider a &amp;lt;math&amp;gt;3\times 3\,\!&amp;lt;/math&amp;gt; matrix,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
N= \left(\begin{array}{ccc}&lt;br /&gt;
              1 &amp;amp; -i &amp;amp; 0 \\&lt;br /&gt;
              i &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 0 &amp;amp; 1 \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
First we calculate&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det\left(\begin{array}{ccc}&lt;br /&gt;
              1-\lambda &amp;amp; -i &amp;amp; 0 \\&lt;br /&gt;
              i         &amp;amp; 1-\lambda  &amp;amp; 0 \\&lt;br /&gt;
              0         &amp;amp;       0    &amp;amp; 1-\lambda &lt;br /&gt;
           \end{array}\right) &lt;br /&gt;
    = (1-\lambda)[(1-\lambda)^2-1].&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This implies that the eigenvalues (see Section [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|C.6]]) are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda = 1,0, \mbox{ or } 2.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Let &amp;lt;math&amp;gt;\lambda_1=1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\lambda_0 = 0\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\lambda_2 = 2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
To find eigenvectors, &amp;lt;!--\index{eigenvalues}--&amp;gt; we calculate&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Nv &amp;amp;= \lambda v, \\&lt;br /&gt;
\left(\begin{array}{ccc}&lt;br /&gt;
              1 &amp;amp; -i &amp;amp; 0 \\&lt;br /&gt;
              i &amp;amp; 1 &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 0 &amp;amp; 1 \end{array}\right)\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right) &amp;amp;= \lambda\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right)&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.31}}&lt;br /&gt;
for each &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
For &amp;lt;math&amp;gt;\lambda = 1\,\!&amp;lt;/math&amp;gt;, we get the following equations:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_1 -iv_2 &amp;amp;= v_1, \\&lt;br /&gt;
iv_1+v_2 &amp;amp;= v_2,  \\&lt;br /&gt;
v_3 &amp;amp;= v_3. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.32}}&lt;br /&gt;
Solving these equations gives &amp;lt;math&amp;gt;v_1 =0\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;v_2 =0\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;v_3\,\!&amp;lt;/math&amp;gt; is any non-zero number (which will be chosen to normalize the vector).  For &amp;lt;math&amp;gt;\lambda&lt;br /&gt;
=0\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_1  &amp;amp;= iv_2, \\&lt;br /&gt;
v_3 &amp;amp;= 0. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.33}}&lt;br /&gt;
And finally, for &amp;lt;math&amp;gt;\lambda = 2\,\!&amp;lt;/math&amp;gt;, we obtain&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_1 -iv_2 &amp;amp;= 2v_1, \\&lt;br /&gt;
iv_1+v_2 &amp;amp;= 2v_2, \\&lt;br /&gt;
v_3 &amp;amp;= 2v_3, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.34}}&lt;br /&gt;
so that &amp;lt;math&amp;gt;v_1 = -iv_2\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
Therefore, our three eigenvectors &amp;lt;!--\index{eigenvalues}--&amp;gt; are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_0 = \frac{1}{\sqrt{2}}\left(\begin{array}{c} i \\ 1\\ 0 \end{array}\right), \; &lt;br /&gt;
v_1 = \left(\begin{array}{c} 0 \\ 0\\ 1 \end{array}\right), \; &lt;br /&gt;
v_2 = \frac{1}{\sqrt{2}}\left(\begin{array}{c} -i \\ 1\\ 0 \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
The matrix &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
V= (v_0,v_1,v_2) = \left(\begin{array}{ccc}&lt;br /&gt;
              i/\sqrt{2} &amp;amp; 0     &amp;amp; -i/\sqrt{2} \\&lt;br /&gt;
              1/\sqrt{2} &amp;amp; 0     &amp;amp; 1/\sqrt{2} \\&lt;br /&gt;
              0          &amp;amp; 1     &amp;amp; 0 \end{array}\right)&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
is the matrix that diagonalizes &amp;lt;math&amp;gt;N\,\!&amp;lt;/math&amp;gt; in the following way:&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
N = VDV^\dagger&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
where&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
D = \left(\begin{array}{ccc}&lt;br /&gt;
              0 &amp;amp; 0  &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 1  &amp;amp; 0 \\&lt;br /&gt;
              0 &amp;amp; 0  &amp;amp; 2 \end{array}\right)&lt;br /&gt;
.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
We may write this as&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
V^\dagger N V = D.   &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This is sometimes called the ''eigenvalue decomposition''&amp;lt;!--\index{eigenvalue decomposition}--&amp;gt;  of the matrix and can also be written as&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
N = \sum_i \lambda_i v_iv^\dagger_i.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|C.35}}&lt;br /&gt;
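The whole of Example 2 can be confirmed with NumPy's Hermitian eigensolver; a brief sketch that checks both the spectrum and the eigenvalue decomposition of Eq. (C.35):&lt;br /&gt;

```python
import numpy as np

N = np.array([[1, -1j, 0],
              [1j,  1, 0],
              [0,   0, 1]])

# Hermitian eigendecomposition; eigenvalues come back in ascending order
evals, V = np.linalg.eigh(N)
assert np.allclose(evals, [0, 1, 2])

# Eigenvalue decomposition, Eq. (C.35): N = sum_i lambda_i v_i v_i^dagger
recon = sum(lam * np.outer(V[:, i], V[:, i].conj())
            for i, lam in enumerate(evals))
assert np.allclose(recon, N)

# The columns of V diagonalize N: V^dagger N V = D
assert np.allclose(V.conj().T @ N @ V, np.diag(evals))
```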
&lt;br /&gt;
====Example 3====&lt;br /&gt;
&lt;br /&gt;
Next, consider the complex &amp;lt;math&amp;gt;3\times 3&amp;lt;/math&amp;gt; Hermitian matrix &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
M = \left(\begin{array}{ccc}&lt;br /&gt;
              \frac{5}{2} &amp;amp; 0  &amp;amp; \frac{i}{2} \\&lt;br /&gt;
              0 &amp;amp; 2  &amp;amp; 0 \\&lt;br /&gt;
              -\frac{i}{2} &amp;amp; 0  &amp;amp; \frac{5}{2} \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
First we calculate&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\det\left(\begin{array}{ccc}&lt;br /&gt;
              \frac{5}{2}-\lambda &amp;amp; 0 &amp;amp; \frac{i}{2} \\&lt;br /&gt;
              0         &amp;amp; 2-\lambda  &amp;amp; 0 \\&lt;br /&gt;
              -\frac{i}{2}         &amp;amp;       0    &amp;amp; \frac{5}{2}-\lambda &lt;br /&gt;
           \end{array}\right) &lt;br /&gt;
    = (2-\lambda)\left[\left(\frac{5}{2}-\lambda\right)^2-\frac{1}{4}\right].&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
This implies that the eigenvalues (see Section [[Appendix C - Vectors and Linear Algebra#Eigenvalues and Eigenvectors|C.6]]) are &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda = 2,2, \mbox{ or } 3.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Note that two of the eigenvalues are the same; the spectrum is degenerate.  &lt;br /&gt;
Let &amp;lt;math&amp;gt;\lambda_1=2\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\lambda_2 = 2\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\lambda_3 = 3\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
To find eigenvectors, &amp;lt;!--\index{eigenvalues}--&amp;gt; we calculate&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
Mv &amp;amp;= \lambda v, \\&lt;br /&gt;
\left(\begin{array}{ccc}&lt;br /&gt;
              \frac{5}{2} &amp;amp; 0 &amp;amp; \frac{i}{2} \\&lt;br /&gt;
              0 &amp;amp; 2 &amp;amp; 0 \\&lt;br /&gt;
              -\frac{i}{2} &amp;amp; 0 &amp;amp; \frac{5}{2} \end{array}\right)\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right) &amp;amp;= \lambda\left(\begin{array}{c} v_1&lt;br /&gt;
              \\ v_2 \\ v_3 \end{array}\right)&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.36}}&lt;br /&gt;
for each &amp;lt;math&amp;gt;\lambda\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
For &amp;lt;math&amp;gt;\lambda = 3\,\!&amp;lt;/math&amp;gt;, we get the following equations:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{5}{2}v_1 + \frac{i}{2}v_3 &amp;amp;= 3v_1, \\&lt;br /&gt;
2v_2 &amp;amp;= 3v_2,  \\&lt;br /&gt;
-\frac{i}{2}v_1 + \frac{5}{2}v_3 &amp;amp;= 3v_3, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.37}}&lt;br /&gt;
so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
iv_3  &amp;amp;= v_1, \\&lt;br /&gt;
v_2 &amp;amp;= 0. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.38}}&lt;br /&gt;
Now for &amp;lt;math&amp;gt;\lambda = 2\,\!&amp;lt;/math&amp;gt;, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\frac{5}{2}v_1 +\frac{i}{2}v_3 &amp;amp;= 2 v_1, \\&lt;br /&gt;
2v_2 &amp;amp;= 2v_2, \\&lt;br /&gt;
-\frac{i}{2}v_1 + \frac{5}{2}v_3 &amp;amp;= 2v_3, &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.39}}&lt;br /&gt;
so &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
v_3  &amp;amp;= iv_1, \\&lt;br /&gt;
v_2 &amp;amp;= \mbox{anything}. &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.40}}&lt;br /&gt;
We would like to have a set of orthonormal eigenvectors.  (For a Hermitian matrix we can always choose such a set.)  We take the three, as yet unnormalized, eigenvectors to be&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_2 = \left(\begin{array}{c} 1 \\ a \\ i&lt;br /&gt;
  \end{array}\right), \;\;  &lt;br /&gt;
v_2^\prime = \left(\begin{array}{c} 1 \\ a^\prime \\ i&lt;br /&gt;
  \end{array}\right), \;\;&lt;br /&gt;
v_3 = \left(\begin{array}{c} i \\ 0 \\ 1&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
We set the inner product of the two vectors &amp;lt;math&amp;gt; v_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; v_2^\prime \,\!&amp;lt;/math&amp;gt; equal to zero so that they are orthogonal: &lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
1 + a a^\prime +1 = 2 + a a^\prime = 0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Now we can choose &amp;lt;math&amp;gt; a = \sqrt{2}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt; a^\prime = -\sqrt{2}\,\!&amp;lt;/math&amp;gt; so that the normalized eigenvectors are&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
v_2 = \frac{1}{2}\left(\begin{array}{c} 1 \\ \sqrt{2} \\ i&lt;br /&gt;
  \end{array}\right), \;\;  &lt;br /&gt;
v_2^\prime = \frac{1}{2}\left(\begin{array}{c} 1 \\ -\sqrt{2} \\ i&lt;br /&gt;
  \end{array}\right), \;\;&lt;br /&gt;
v_3 = \frac{1}{\sqrt{2}}\left(\begin{array}{c} i \\ 0 \\ 1&lt;br /&gt;
  \end{array}\right).&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
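These eigenvectors and their orthonormality can be verified directly; a sketch in NumPy (the factors of 1/2 and 1/sqrt(2) are the normalizations found above):&lt;br /&gt;

```python
import numpy as np

M = np.array([[ 2.5, 0, 0.5j],
              [ 0,   2, 0   ],
              [-0.5j, 0, 2.5]])

v2  = np.array([1,  np.sqrt(2), 1j]) / 2
v2p = np.array([1, -np.sqrt(2), 1j]) / 2
v3  = np.array([1j, 0, 1]) / np.sqrt(2)

# Each is an eigenvector with the stated eigenvalue ...
assert np.allclose(M @ v2, 2 * v2)
assert np.allclose(M @ v2p, 2 * v2p)
assert np.allclose(M @ v3, 3 * v3)

# ... and together they form an orthonormal set
V = np.column_stack([v2, v2p, v3])
assert np.allclose(V.conj().T @ V, np.eye(3))
```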
&lt;br /&gt;
===Tensor Products===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The tensor product, &amp;lt;!--\index{tensor product} --&amp;gt;&lt;br /&gt;
or the Kronecker product, &amp;lt;!--\index{Kronecker product}--&amp;gt;&lt;br /&gt;
is used extensively in quantum mechanics and&lt;br /&gt;
throughout the course.  It is commonly denoted with a &amp;lt;math&amp;gt;\otimes\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
symbol, although this is often left out.  In fact, the following&lt;br /&gt;
are commonly found in the literature as notation for the tensor&lt;br /&gt;
product of two vectors &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert\Phi\right\rangle\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert\Psi\right\rangle\otimes\left\vert\Phi\right\rangle &amp;amp;= \left\vert\Psi\right\rangle\left\vert\Phi\right\rangle  \\&lt;br /&gt;
                         &amp;amp;= \left\vert\Psi\Phi\right\rangle.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.41}}&lt;br /&gt;
Each of these notations has its advantages, and all of them will be used in&lt;br /&gt;
different circumstances in this text.  &lt;br /&gt;
&lt;br /&gt;
The tensor product is also often used for operators.  Several&lt;br /&gt;
examples &lt;br /&gt;
will be given, one that explicitly calculates the tensor product for&lt;br /&gt;
two vectors and one that calculates it for two matrices which could&lt;br /&gt;
represent operators.  However, these cases are not truly different, in the sense&lt;br /&gt;
that a vector is simply a &amp;lt;math&amp;gt;1\times n\,\!&amp;lt;/math&amp;gt; or an &amp;lt;math&amp;gt;n\times 1\,\!&amp;lt;/math&amp;gt; matrix.  It is also&lt;br /&gt;
noteworthy that the two objects in the tensor product need not be of&lt;br /&gt;
the same type.  In general, a tensor product of an &amp;lt;math&amp;gt;n\times m\,\!&amp;lt;/math&amp;gt; object&lt;br /&gt;
(array) with a &amp;lt;math&amp;gt;p\times q\,\!&amp;lt;/math&amp;gt; object will produce an &amp;lt;math&amp;gt;np\times mq\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
object.  &lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
The tensor product of two objects is computed as follows.&lt;br /&gt;
Let &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt; be an &amp;lt;math&amp;gt;n\times m\,\!&amp;lt;/math&amp;gt; array and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt; a &amp;lt;math&amp;gt;p\times q\,\!&amp;lt;/math&amp;gt; array, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
A = \left(\begin{array}{cccc} &lt;br /&gt;
           a_{11} &amp;amp; a_{12} &amp;amp; \cdots &amp;amp; a_{1m} \\&lt;br /&gt;
           a_{21} &amp;amp; a_{22} &amp;amp; \cdots &amp;amp; a_{2m} \\&lt;br /&gt;
           \vdots &amp;amp;        &amp;amp; \ddots &amp;amp;      \\&lt;br /&gt;
           a_{n1} &amp;amp; a_{n2} &amp;amp; \cdots &amp;amp; a_{nm} \end{array}\right),&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.42}}&lt;br /&gt;
and similarly for &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;.  Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
A\otimes B = \left(\begin{array}{cccc} &lt;br /&gt;
             a_{11}B &amp;amp; a_{12}B &amp;amp; \cdots &amp;amp; a_{1m}B \\&lt;br /&gt;
             a_{21}B &amp;amp; a_{22}B &amp;amp; \cdots &amp;amp; a_{2m}B \\&lt;br /&gt;
             \vdots  &amp;amp;         &amp;amp; \ddots &amp;amp;      \\&lt;br /&gt;
             a_{n1}B &amp;amp; a_{n2}B &amp;amp; \cdots &amp;amp; a_{nm}B \end{array}\right).  &lt;br /&gt;
&amp;lt;/math&amp;gt;|C.43}}&lt;br /&gt;
&lt;br /&gt;
Let us now consider two examples.  First let &amp;lt;math&amp;gt;\left\vert\phi\right\rangle\,\!&amp;lt;/math&amp;gt; and&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt; be as before,&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
\left\vert\psi\right\rangle = \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right) \;\; &lt;br /&gt;
\mbox{and} \;\; &lt;br /&gt;
\left\vert\phi\right\rangle = \left(\begin{array}{c} \gamma \\ \delta&lt;br /&gt;
  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert\psi\right\rangle\otimes\left\vert\phi\right\rangle &amp;amp;= \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right) &lt;br /&gt;
\otimes &lt;br /&gt;
\left(\begin{array}{c} \gamma \\ \delta&lt;br /&gt;
  \end{array}\right)   \\&lt;br /&gt;
                            &amp;amp;= \left(\begin{array}{c} \alpha\gamma\\ &lt;br /&gt;
                                                     \alpha\delta \\&lt;br /&gt;
                                                     \beta\gamma \\ &lt;br /&gt;
                                                     \beta\delta &lt;br /&gt;
                                       \end{array}\right).  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.44}}&lt;br /&gt;
Also&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left\vert\psi\right\rangle\otimes\left\langle\phi\right\vert &amp;amp;= \left\vert\psi\right\rangle\left\langle\phi\right\vert \\&lt;br /&gt;
                            &amp;amp;= \left(\begin{array}{c} \alpha \\ \beta&lt;br /&gt;
  \end{array}\right) &lt;br /&gt;
\otimes &lt;br /&gt;
\left(\begin{array}{cc} \gamma^* &amp;amp; \delta^*&lt;br /&gt;
  \end{array}\right)   \\&lt;br /&gt;
                            &amp;amp;= \left(\begin{array}{cc}&lt;br /&gt;
                                \alpha\gamma^* &amp;amp; \alpha\delta^* \\&lt;br /&gt;
                                \beta\gamma^*  &amp;amp; \beta\delta^* &lt;br /&gt;
                                       \end{array}\right).  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.45}}&lt;br /&gt;
&lt;br /&gt;
Now consider two matrices&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;&lt;br /&gt;
A = \left(\begin{array}{cc} &lt;br /&gt;
                 a &amp;amp; b \\&lt;br /&gt;
                 c &amp;amp; d  \end{array}\right) &lt;br /&gt;
 \;\; \mbox{and} \;\;&lt;br /&gt;
B = \left(\begin{array}{cc} &lt;br /&gt;
               e &amp;amp; f \\&lt;br /&gt;
               g &amp;amp; h  \end{array}\right).  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
Then &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
A\otimes B &amp;amp;=  \left(\begin{array}{cc} &lt;br /&gt;
                 a &amp;amp; b \\&lt;br /&gt;
                 c &amp;amp; d  \end{array}\right) &lt;br /&gt;
 \otimes&lt;br /&gt;
               \left(\begin{array}{cc} &lt;br /&gt;
                 e &amp;amp; f \\&lt;br /&gt;
                 g &amp;amp; h  \end{array}\right)   \\  &lt;br /&gt;
           &amp;amp;=  \left(\begin{array}{cccc} &lt;br /&gt;
                 ae &amp;amp; af &amp;amp; be &amp;amp; bf \\&lt;br /&gt;
                 ag &amp;amp; ah &amp;amp; bg &amp;amp; bh \\&lt;br /&gt;
                 ce &amp;amp; cf &amp;amp; de &amp;amp; df \\&lt;br /&gt;
                 cg &amp;amp; ch &amp;amp; dg &amp;amp; dh \end{array}\right).  &lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|C.46}}&lt;br /&gt;
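All three worked examples can be reproduced with NumPy's np.kron, which implements exactly the block formula of Eq. (C.43).  A sketch with illustrative numerical entries in place of the symbols:&lt;br /&gt;

```python
import numpy as np

psi = np.array([2, 3])      # (alpha, beta), illustrative numbers
phi = np.array([5, 7])      # (gamma, delta)

# Eq. (C.44): |psi> tensor |phi> stacks the products in order
assert np.allclose(np.kron(psi, phi), [10, 14, 15, 21])

# Eq. (C.45): |psi><phi| is the outer product (second vector conjugated)
outer = np.outer(psi, phi.conj())
assert np.allclose(outer, [[10, 14], [15, 21]])

# Eq. (C.46): for matrices, each entry a_ij is replaced by the block a_ij * B
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])
AkB = np.kron(A, B)
assert AkB.shape == (4, 4)               # (np) x (mq)
assert np.allclose(AkB[:2, 2:], 2 * B)   # the a_12 block
```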
&lt;br /&gt;
====Properties of Tensor Products====&lt;br /&gt;
&lt;br /&gt;
Listed here are useful properties of tensor products, where &amp;lt;math&amp;gt;A\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;C\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;D\,\!&amp;lt;/math&amp;gt; are arrays of any type, with compatible dimensions wherever products or sums appear:&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)(C\otimes D) = AC \otimes BD\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)^T = A^T\otimes B^T\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)^* = A^*\otimes B^*\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A\otimes B)\otimes C = A\otimes(B\otimes C)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;(A+B) \otimes C = A\otimes C+B\otimes C\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;A\otimes(B+C) = A\otimes B + A\otimes C\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
#&amp;lt;math&amp;gt;\mbox{Tr}(A\otimes B) = \mbox{Tr}(A)\mbox{Tr}(B)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
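Several of these properties can be spot-checked numerically; a sketch with randomly chosen square matrices (properties 1, 2, 4, and 7 are checked here):&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C, D = (rng.standard_normal((2, 2)) for _ in range(4))
kron = np.kron

# 1. Mixed-product property
assert np.allclose(kron(A, B) @ kron(C, D), kron(A @ C, B @ D))
# 2. Transpose distributes over the tensor product
assert np.allclose(kron(A, B).T, kron(A.T, B.T))
# 4. Associativity
assert np.allclose(kron(kron(A, B), C), kron(A, kron(B, C)))
# 7. Trace factorizes
assert np.isclose(np.trace(kron(A, B)), np.trace(A) * np.trace(B))
```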
(See [[Bibliography#HornNJohnsonII|Horn and Johnson, Topics in Matrix Analysis]], Chapter 4.)&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1787</id>
		<title>Chapter 2 - Qubits and Collections of Qubits</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1787"/>
		<updated>2012-01-05T08:35:55Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Many-qubit Circuits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
There are several parts to any quantum information processing task. Some of these were&lt;br /&gt;
written down and discussed by David DiVincenzo in the early days of quantum computing&lt;br /&gt;
research and are therefore called DiVincenzo’s requirements for quantum computing. These&lt;br /&gt;
include, but are not limited to, the following, which will be discussed in this chapter. Other&lt;br /&gt;
requirements will be discussed later.&lt;br /&gt;
&lt;br /&gt;
Five requirements [[Bibliography#qcrequirements|DiVincenzo:2000]]:&lt;br /&gt;
#Be a scalable physical system with well-defined qubits&lt;br /&gt;
#Be initializable to a simple fiducial state such as &amp;lt;math&amp;gt;\left\vert{000...}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
#Have much longer decoherence times than gating times&lt;br /&gt;
#Have a universal set of quantum gates&lt;br /&gt;
#Permit qubit-specific measurements&lt;br /&gt;
&lt;br /&gt;
The first requirement is a set of two-state quantum systems which can serve as qubits. The&lt;br /&gt;
second is to be able to initialize the set of qubits to some reference state. In this chapter,&lt;br /&gt;
these will be taken for granted. The third concerns noise, which in this context is known by the&lt;br /&gt;
term decoherence. The term decoherence has had a more precise definition in the past,&lt;br /&gt;
but here it will usually be synonymous with noise. Noise and decoherence will be discussed in [[Chapter 6 - Noise in Quantum Systems|Chapter 6]].  This chapter is primarily concerned with the fifth of these criteria.  This will enable us to discuss many interesting aspects of quantum information problems while postponing some technical details regarding the other criteria.&lt;br /&gt;
&lt;br /&gt;
===Qubit States===&lt;br /&gt;
&lt;br /&gt;
As mentioned in the introduction, a qubit, or quantum bit, is represented by a two-state&lt;br /&gt;
quantum system. It is referred to as a two-state quantum system, although in many&lt;br /&gt;
physical examples the qubit is represented by two selected states of a quantum&lt;br /&gt;
system that has many available states. These two states are represented by the vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; and the qubit could be in the state &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;, the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, or a complex superposition of&lt;br /&gt;
these two. A qubit state which is an arbitrary superposition is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle,&amp;lt;/math&amp;gt; |2.1}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers. Our objective is to use these two states to store and&lt;br /&gt;
manipulate information. If the state of the system is confined to one state, the other, or a&lt;br /&gt;
superposition of the two, then&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1.\,\!&amp;lt;/math&amp;gt; |2.2}}&lt;br /&gt;
&lt;br /&gt;
This means that this vector is normalized, i.e. its magnitude (or length) is one. The set of all such&lt;br /&gt;
vectors forms a two-dimensional complex (so four-dimensional real) vector space.&amp;lt;ref name=&amp;quot;test&amp;quot;&amp;gt;[[Appendix B - Complex Numbers|Appendix B]] contains a basic introduction to complex numbers.&amp;lt;/ref&amp;gt; The basis vectors for such a space are the two vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which are called ''computational basis'' states. These two basis states are represented by&lt;br /&gt;
 &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{0}\right\rangle = \left(\begin{array}{c} 1 \\ 0\end{array}\right), \;\;\left\vert{1}\right\rangle = \left(\begin{array}{c} 0 \\ 1\end{array}\right).&amp;lt;/math&amp;gt; |2.3}}&lt;br /&gt;
&lt;br /&gt;
Thus, the qubit state can be rewritten as&lt;br /&gt;
&lt;br /&gt;
{{Equation |&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1\end{array}\right).&amp;lt;/math&amp;gt; |2.4}}&lt;br /&gt;
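The vector form of a qubit state is easy to sketch numerically. The following Python/NumPy snippet is an illustration only; the particular amplitudes are hypothetical values chosen to satisfy the normalization condition of Eq. (2.2):&lt;br /&gt;

```python
import numpy as np

# A qubit state |psi> = a0|0> + a1|1> as a 2-component complex vector.
# The amplitudes a0, a1 below are illustrative, not from the text.
ket0 = np.array([1, 0], dtype=complex)   # |0>, Eq. (2.3)
ket1 = np.array([0, 1], dtype=complex)   # |1>, Eq. (2.3)

a0, a1 = 0.6, 0.8j                       # chosen so |a0|^2 + |a1|^2 = 1
psi = a0 * ket0 + a1 * ket1              # the column vector (a0, a1), Eq. (2.4)

norm_sq = abs(a0)**2 + abs(a1)**2        # normalization, Eq. (2.2)
print(np.isclose(norm_sq, 1.0))          # True
```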
&lt;br /&gt;
===Qubit Gates===&lt;br /&gt;
&lt;br /&gt;
During a computation, one qubit state will need to be taken to a different one. In fact,&lt;br /&gt;
it should be possible to operate on any valid state to obtain any other valid state. Since this&lt;br /&gt;
is a complex vector with magnitude one, the matrix transformation required for closed system&lt;br /&gt;
evolution is unitary. (See [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Appendix C, Sec. C.3.8]].) These unitary matrices, or unitary&lt;br /&gt;
transformations, as well as their generalization to many qubits, transform one complex&lt;br /&gt;
vector into another and are also called ''quantum gates'', or gating operations. Mathematically,&lt;br /&gt;
we may think of them as rotations of the complex vector; in some cases (but not all) they&lt;br /&gt;
correspond to actual rotations of the physical system.&lt;br /&gt;
&lt;br /&gt;
====Circuit Diagrams for Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
Unitary transformations are represented in a circuit diagram with a box around the unitary&lt;br /&gt;
transformation. Consider a unitary transformation &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on a single qubit state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;. If the&lt;br /&gt;
result of the transformation is &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, we can then write&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = V\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.5}}&lt;br /&gt;
&lt;br /&gt;
The corresponding circuit diagram is shown in Fig. 2.1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:Vbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.1: Circuit diagram for a one-qubit gate that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;. The input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the output, &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notice that the diagram is read from left to right. This means that if two consecutive&lt;br /&gt;
gates are implemented, say &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;, the equation reads:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle = UV\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.6}}&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The circuit diagram will have the boxes in the reverse order from the equation, i.e.&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on the left and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; on the right (refer to Fig. 2.2 below). While this is somewhat confusing, it is important to remember the convention; circuit diagrams become increasingly important as the number of operations grows.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:UVbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.2: Circuit diagram for two one-qubit gates that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; followed by another unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt;. Like the single gate, the input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the new output, &amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
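As a sketch of this ordering convention, one can check numerically that applying &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; corresponds to the matrix product &amp;lt;math&amp;gt;UV&amp;lt;/math&amp;gt;. The particular gates chosen below (a bit-flip and a Hadamard) are illustrative only:&lt;br /&gt;

```python
import numpy as np

# Applying V first and U second means the product UV multiplies the state,
# Eq. (2.6), even though V appears leftmost in the circuit diagram (Fig. 2.2).
V = np.array([[0, 1], [1, 0]], dtype=complex)                      # example V: bit-flip
U = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]], dtype=complex)  # example U: Hadamard

psi = np.array([1, 0], dtype=complex)    # start in |0>
psi_after_V = V @ psi                    # first gate
psi_pp = U @ psi_after_V                 # |psi''> = U V |psi>

# Same result as forming the single matrix product UV first:
print(np.allclose(psi_pp, (U @ V) @ psi))   # True
```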
&lt;br /&gt;
====Examples of Important Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
There are, of course, an infinite number of possible unitary transformations that we could&lt;br /&gt;
implement on a single qubit since the set of single-qubit unitary transformations can be parameterized by&lt;br /&gt;
three continuous parameters (ignoring an overall phase). However, a single gate will contain a single unitary transformation, which&lt;br /&gt;
means that all three parameters are fixed. There are several such transformations that are&lt;br /&gt;
used repeatedly. For this reason, they are listed here along with their actions on a generic&lt;br /&gt;
state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;. Note that one could also completely define the transformation by&lt;br /&gt;
its action on a complete set of basis states.&lt;br /&gt;
&lt;br /&gt;
The following is called an &amp;lt;nowiki&amp;gt;“x”&amp;lt;/nowiki&amp;gt; gate, or a bit-flip, &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X = \left(\begin{array}{cc} 0 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.7}}&lt;br /&gt;
&lt;br /&gt;
Its action on a state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is to exchange the basis states,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X\left\vert{\psi}\right\rangle = \alpha_0\left\vert{1}\right\rangle + \alpha_1\left\vert{0}\right\rangle.&amp;lt;/math&amp;gt;|2.8}}&lt;br /&gt;
&lt;br /&gt;
For this reason, it is also sometimes called a NOT gate. However, this term will be avoided&lt;br /&gt;
because a general NOT gate does not exist for all quantum states. (It does work for all qubit&lt;br /&gt;
states, but this is a special case.)&lt;br /&gt;
&lt;br /&gt;
The next gate is called a ''phase gate'' or a “z” gate. It is also sometimes called a ''phase-flip'',&lt;br /&gt;
and is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z = \left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.9}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate is to introduce a sign change on the &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; component of the state, as can be seen&lt;br /&gt;
from&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle - \alpha_1\left\vert{1}\right\rangle.&amp;lt;/math&amp;gt;|2.10}}&lt;br /&gt;
&lt;br /&gt;
The term phase gate is also used for the more general transformation&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;P = \left(\begin{array}{cc} e^{i\theta} &amp;amp; 0 \\ &lt;br /&gt;
                                0       &amp;amp; e^{-i\theta} \end{array}\right).&amp;lt;/math&amp;gt;|2.11}}&lt;br /&gt;
&lt;br /&gt;
For this reason, the z-gate will be called either a &#8220;z-gate&#8221; or a phase-flip gate.&lt;br /&gt;
&lt;br /&gt;
Another gate closely related to these is the &#8220;y&#8221; gate. This gate is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y =  \left(\begin{array}{cc} 0 &amp;amp; -i \\ &lt;br /&gt;
                      i &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.12}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate on a state is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y\left\vert{\psi}\right\rangle = -i\alpha_1\left\vert{0}\right\rangle +i \alpha_0\left\vert{1}\right\rangle &lt;br /&gt;
            = -i(\alpha_1\left\vert{0}\right\rangle - \alpha_0\left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.13}}&lt;br /&gt;
&lt;br /&gt;
From this last expression, it is clear that, up to an overall factor of &amp;lt;math&amp;gt;-i\,\!&amp;lt;/math&amp;gt;, this gate is the same&lt;br /&gt;
as acting on a state with both &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt; gates. However, the order matters, and it&lt;br /&gt;
should be noted that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;XZ = -i Y,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
whereas&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;ZX = i Y.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that the order matters should not be a surprise to anyone since matrices in general&lt;br /&gt;
do not commute. However, this situation arises so often in quantum mechanics that the&lt;br /&gt;
difference between the two orderings is given its own name and notation: it is called the ''commutator'' and is denoted by &amp;lt;math&amp;gt;[\cdot,\cdot]&amp;lt;/math&amp;gt;. That is, for any two matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;, the commutator is defined to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[A,B] = AB -BA.\,\!&amp;lt;/math&amp;gt;|2.14}}&lt;br /&gt;
For the two gates &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt;,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[X,Z] = -2iY.\,\!&amp;lt;/math&amp;gt;|2.15}}&lt;br /&gt;
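These products and the commutator of Eq. (2.15) can be verified directly with a few lines of NumPy (a sketch, not part of the text):&lt;br /&gt;

```python
import numpy as np

# The Pauli gates of Eqs. (2.7), (2.9), (2.12) and the commutator, Eq. (2.14).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def commutator(A, B):
    """[A, B] = AB - BA, Eq. (2.14)."""
    return A @ B - B @ A

print(np.allclose(X @ Z, -1j * Y))             # XZ = -iY: True
print(np.allclose(Z @ X, 1j * Y))              # ZX =  iY: True
print(np.allclose(commutator(X, Z), -2j * Y))  # Eq. (2.15): True
```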
A very important gate which is used in many quantum information processing protocols,&lt;br /&gt;
including quantum algorithms, is called the Hadamard gate,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.16}}&lt;br /&gt;
In this case, it's helpful to look at what this gate does to the two basis states:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H \left\vert{0}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle + \left\vert{1}\right\rangle), &amp;lt;/math&amp;gt;&amp;lt;br /&amp;gt;&amp;lt;math&amp;gt;H \left\vert{1}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle - \left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.17}}&lt;br /&gt;
&lt;br /&gt;
So the Hadamard gate will take either one of the basis states and produce an equal superposition&lt;br /&gt;
of the two basis states; this is the reason it is so often used in quantum information&lt;br /&gt;
processing tasks. On a generic state,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert{\psi}\right\rangle = \frac{1}{\sqrt{2}}[(\alpha_0+\alpha_1)\left\vert{0}\right\rangle + (\alpha_0-\alpha_1)\left\vert{1}\right\rangle].&amp;lt;/math&amp;gt;|2.18}}&lt;br /&gt;
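The action of the Hadamard gate on the basis states and on a generic state can be checked numerically; the amplitudes below are an arbitrary illustration:&lt;br /&gt;

```python
import numpy as np

# The Hadamard gate, Eq. (2.16), and its action on the basis states, Eq. (2.17).
H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]], dtype=complex)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

print(np.allclose(H @ ket0, (ket0 + ket1) / np.sqrt(2)))  # True
print(np.allclose(H @ ket1, (ket0 - ket1) / np.sqrt(2)))  # True

# On a generic state, H gives (1/sqrt(2))[(a0 + a1)|0> + (a0 - a1)|1>],
# including the overall 1/sqrt(2) factor. Amplitudes here are illustrative.
a0, a1 = 0.6, 0.8j
psi = a0 * ket0 + a1 * ket1
expected = ((a0 + a1) * ket0 + (a0 - a1) * ket1) / np.sqrt(2)
print(np.allclose(H @ psi, expected))                     # True
```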
&lt;br /&gt;
===The Pauli Matrices===&lt;br /&gt;
The three matrices &amp;lt;math&amp;gt;X,\,\!&amp;lt;/math&amp;gt; [[#eq2.7|Eq.(2.7)]] &amp;lt;math&amp;gt;Y,\,\!&amp;lt;/math&amp;gt; [[#eq2.12|Eq.(2.12)]]  and &amp;lt;math&amp;gt; Z \,\!&amp;lt;/math&amp;gt; [[#eq2.9|Eq.(2.9)]] are called the Pauli matrices. They are also sometimes denoted &amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt; respectively. They are ubiquitous in quantum computing and quantum information processing. This is because they, along with the &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix, form a basis for the set of &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrices and can be used to&lt;br /&gt;
describe all &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; unitary transformations as well. We will return to the latter point in the next chapter.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table2.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 2.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot; |Table 2.1: ''The Pauli Matrices.  The table shows the Pauli matrices, three different, but common notations, and the action on a state.  The &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a binary digit, 0 or 1.''&lt;br /&gt;
|-&lt;br /&gt;
|Pauli Matrix&lt;br /&gt;
|Notation 1&lt;br /&gt;
|Notation 2&lt;br /&gt;
|Notation 3&lt;br /&gt;
|Action&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; 1 \\ 1 &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X|x\rangle = |x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; -i \\ i &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y =iXZ\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y|x\rangle = i(-1)^x|x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z|x\rangle = (-1)^x|x\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To show that they form a basis for &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; Hermitian matrices, note that any such matrix can be written in the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;A = \left(\begin{array}{cc} &lt;br /&gt;
                a_0+a_3  &amp;amp; a_1+ia_2 \\ &lt;br /&gt;
                a_1-ia_2 &amp;amp; a_0-a_3 \end{array}\right).&amp;lt;/math&amp;gt;|2.19}}&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;a_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_3\,\!&amp;lt;/math&amp;gt; are arbitrary, &amp;lt;math&amp;gt;a_0 + a_3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_0 - a_3\,\!&amp;lt;/math&amp;gt; are arbitrary too. This matrix can be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}A &amp;amp;= a_0 \mathbb{I} + a_1X + a_2Y + a_3 Z \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + a_1\sigma_1 + a_2\sigma_2 + a_3 \sigma_3 \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + \vec{a}\cdot\vec{\sigma}, \\&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|2.20}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{a}\cdot\vec{\sigma} = \sum_{i=1}^3a_i\sigma_i\,\!&amp;lt;/math&amp;gt; is the &amp;quot;dot&lt;br /&gt;
product&amp;quot; between &amp;lt;math&amp;gt;\vec{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{\sigma} = (\sigma_1,\sigma_2,\sigma_3)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An important and useful relationship between these is the following (which shows why&lt;br /&gt;
the latter notation above is so useful)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma_i\sigma_j = \mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k,&amp;lt;/math&amp;gt;|2.21}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;i, j, k\,\!&amp;lt;/math&amp;gt; are numbers from the set &amp;lt;math&amp;gt;\{1, 2, 3\}\,\!&amp;lt;/math&amp;gt; and the definitions for &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; are given&lt;br /&gt;
in Eqs. [[Appendix C - Vectors and Linear Algebra#eqC.17|(C.17)]] and [[Appendix C - Vectors and Linear Algebra#eqC.8|(C.8)]] respectively. The three matrices &amp;lt;math&amp;gt;\sigma_1, \sigma_2, \sigma_3\,\!&amp;lt;/math&amp;gt; are traceless Hermitian&lt;br /&gt;
matrices and they can be seen to be orthogonal using the so-called ''Hilbert-Schmidt inner product'', which is defined, for matrices &amp;lt;math&amp;gt; A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(A,B) = \mbox{Tr}(A^\dagger B).&amp;lt;/math&amp;gt;|2.22}}&lt;br /&gt;
&lt;br /&gt;
The orthogonality for the set is then summarized as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\sigma_i,\sigma_j) = \mbox{Tr}(\sigma_i\sigma_j) = 2\delta_{ij}.\,\!&amp;lt;/math&amp;gt;|2.23}}&lt;br /&gt;
&lt;br /&gt;
This property is contained in Eq. [[#eq2.21|(2.21)]]. This one equation also contains all of the commutators.&lt;br /&gt;
Subtracting the equation with the product reversed,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = (\mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k) &lt;br /&gt;
                      -(\mathbb{I}\delta_{ji} +i \epsilon_{jik}\sigma_k),&amp;lt;/math&amp;gt;|2.24}}&lt;br /&gt;
&lt;br /&gt;
but &amp;lt;math&amp;gt;\delta_{ij}=\delta_{ji}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk} = -\epsilon_{jik}\,\!&amp;lt;/math&amp;gt;.  This can now be simplified,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = 2i \epsilon_{ijk}\sigma_k.\,\!&amp;lt;/math&amp;gt;|2.25}}&lt;br /&gt;
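As a sketch of how Eqs. (2.20), (2.22) and (2.23) work together, the expansion coefficients of a Hermitian matrix can be extracted with the Hilbert-Schmidt inner product, &amp;lt;math&amp;gt;a_0 = \mbox{Tr}(A)/2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_i = \mbox{Tr}(\sigma_i A)/2\,\!&amp;lt;/math&amp;gt;; the example matrix below is hypothetical:&lt;br /&gt;

```python
import numpy as np

# Decompose a Hermitian 2x2 matrix in the Pauli basis, Eq. (2.20), using the
# Hilbert-Schmidt inner product of Eq. (2.22) and orthogonality, Eq. (2.23).
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# An illustrative Hermitian matrix with (a0, a1, a2, a3) = (1, 2, 3, 4):
A = 1 * I + 2 * X + 3 * Y + 4 * Z

# a_0 = Tr(A)/2 and a_i = Tr(sigma_i A)/2 recover the coefficients.
coeffs = [float(np.trace(P @ A).real / 2) for P in (I, X, Y, Z)]
print(coeffs)   # [1.0, 2.0, 3.0, 4.0]
```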
&lt;br /&gt;
===States of Many Qubits===&lt;br /&gt;
Let us now consider the states of several (or many) qubits. For one qubit, there are two&lt;br /&gt;
possible basis states, say &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. If there are two qubits, each with these basis states,&lt;br /&gt;
basis states for the two together are found by using the tensor product. (See Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]].)&lt;br /&gt;
The set of basis states obtained in this way is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{0}\right\rangle\otimes\left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{1}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle\otimes\left\vert{1}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This set is more often written in short-hand notation as (again see Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]] for details and examples)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{00}\right\rangle, \; \left\vert{01}\right\rangle, \;&lt;br /&gt;
  \left\vert{10}\right\rangle, \; \left\vert{11}\right\rangle \right\},\,\!&amp;lt;/math&amp;gt;|2.26}}&lt;br /&gt;
&lt;br /&gt;
which can also be expressed as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left(\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right)&lt;br /&gt;
\right\}.\,\!&amp;lt;/math&amp;gt;|2.27}}&lt;br /&gt;
&lt;br /&gt;
The extension to three qubits is straight-forward,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{000}\right\rangle, \; \left\vert{001}\right\rangle, \;&lt;br /&gt;
  \left\vert{010}\right\rangle, \; \left\vert{011}\right\rangle, \; \left\vert{100}\right\rangle, \; \left\vert{101}\right\rangle, \;&lt;br /&gt;
  \left\vert{110}\right\rangle, \; \left\vert{111}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.28}}&lt;br /&gt;
&lt;br /&gt;
Those familiar with binary will recognize these as the numbers zero through seven. Thus we&lt;br /&gt;
consider this an ''ordered basis'', and the states can equivalently be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{2}\right\rangle, \; \left\vert{3}\right\rangle, \; \left\vert{4}\right\rangle, \; \left\vert{5}\right\rangle, \;&lt;br /&gt;
  \left\vert{6}\right\rangle, \; \left\vert{7}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.29}}&lt;br /&gt;
&lt;br /&gt;
The ordering of the products is important because each spot&lt;br /&gt;
corresponds to a physical particle or physical system.  When some&lt;br /&gt;
confusion may arise, we may also label the ket with a subscript to&lt;br /&gt;
denote the particle or position.  For example, two different people,&lt;br /&gt;
Alice and Bob, can be used to represent distant parties that may&lt;br /&gt;
share some information or wish to communicate.  In this case, the&lt;br /&gt;
state belonging to Alice can be denoted &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_A\,\!&amp;lt;/math&amp;gt;.  Or if she is&lt;br /&gt;
referred to as party 1 or particle 1, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The most general 2-qubit state is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_{00}\left\vert{00}\right\rangle + \alpha_{01}\left\vert{01}\right\rangle &lt;br /&gt;
             + \alpha_{10}\left\vert{10}\right\rangle + \alpha_{11}\left\vert{11}\right\rangle &lt;br /&gt;
           =\left(\begin{array}{c} \alpha_{00} \\ \alpha_{01} \\ &lt;br /&gt;
                                   \alpha_{10} \\ \alpha_{11} \end{array}\right).&amp;lt;/math&amp;gt;|2.30}}&lt;br /&gt;
&lt;br /&gt;
The normalization condition is &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_{00}|^2  + |\alpha_{01}|^2&lt;br /&gt;
             + |\alpha_{10}|^2 + |\alpha_{11}|^2=1.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
The generalization to an arbitrary number of qubits, say &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, is also&lt;br /&gt;
rather straight-forward and can be written as &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \sum_{i=0}^{2^n-1} \alpha_i\left\vert{i}\right\rangle.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
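These tensor-product constructions map directly onto the Kronecker product. The following NumPy sketch (with illustrative amplitudes) reproduces the basis vectors of Eq. (2.27) and a general state of the form in Eq. (2.30):&lt;br /&gt;

```python
import numpy as np

# Two-qubit basis states via the tensor (Kronecker) product, Eqs. (2.26)-(2.27).
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

ket00 = np.kron(ket0, ket0)   # (1, 0, 0, 0)
ket01 = np.kron(ket0, ket1)   # (0, 1, 0, 0)
ket10 = np.kron(ket1, ket0)   # (0, 0, 1, 0)
ket11 = np.kron(ket1, ket1)   # (0, 0, 0, 1)

# A general 2-qubit state, Eq. (2.30); amplitudes are an arbitrary
# normalized illustration.
amps = np.array([0.5, 0.5, 0.5, 0.5], dtype=complex)
psi = sum(a * k for a, k in zip(amps, (ket00, ket01, ket10, ket11)))
print(np.isclose(np.sum(np.abs(amps)**2), 1.0))   # normalization: True
```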
&lt;br /&gt;
===Quantum Gates for Many Qubits===&lt;br /&gt;
&lt;br /&gt;
Just as in the case of a single qubit, the most general closed-system transformation of a&lt;br /&gt;
state of many qubits is a unitary transformation. Being able to make an arbitrary unitary&lt;br /&gt;
transformation on many qubits is an important task. If an arbitrary unitary transformation&lt;br /&gt;
on a set of qubits can be made, then any quantum gate can be implemented. If this ability to&lt;br /&gt;
implement any arbitrary quantum gate can be accomplished using a particular set of quantum&lt;br /&gt;
gates, that set is said to be a ''universal set of gates'' or that the condition of ''universality'' has&lt;br /&gt;
been met by this set. It turns out that there is a theorem which provides one way for&lt;br /&gt;
identifying a universal set of gates.&lt;br /&gt;
&lt;br /&gt;
'''Theorem:'''&lt;br /&gt;
&lt;br /&gt;
''The ability to implement an entangling gate between any two qubits, plus the ability to implement all single-qubit unitary transformations, will enable universal quantum computing.''&lt;br /&gt;
&lt;br /&gt;
It turns out that one doesn’t need to be able to perform an entangling gate between&lt;br /&gt;
distant qubits; nearest-neighbor interactions are sufficient. We can transfer the state of a&lt;br /&gt;
qubit to a qubit that is next to the one we would like it to interact with, then perform&lt;br /&gt;
the entangling gate between the two and then transfer back.&lt;br /&gt;
&lt;br /&gt;
This is an important and often used theorem which will be the main focus of the next&lt;br /&gt;
few sections. A particular class of two-qubit gates which can be used to entangle qubits will&lt;br /&gt;
be discussed along with circuit diagrams for many qubits.&lt;br /&gt;
&lt;br /&gt;
====Controlled Operations====&lt;br /&gt;
&lt;br /&gt;
A controlled operation is one that is conditioned on the state of another part of the system, usually a qubit. The most commonly cited example is the CNOT (controlled NOT) gate, which flips one (target) qubit if another (control) qubit is in the state &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;; thus it is a controlled NOT operation for qubits. This gate is used often enough to warrant detailed discussion here.&lt;br /&gt;
&lt;br /&gt;
Consider the following matrix operation on two qubits:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;C_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.31}}&lt;br /&gt;
&lt;br /&gt;
Under this transformation, the following changes occur:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{array}{c|c}&lt;br /&gt;
         \; \left\vert{\psi}\right\rangle\; &amp;amp; C_{12}\left\vert{\psi}\right\rangle \\ \hline&lt;br /&gt;
                \left\vert{00}\right\rangle &amp;amp; \left\vert{00}\right\rangle \\&lt;br /&gt;
                \left\vert{01}\right\rangle &amp;amp; \left\vert{01}\right\rangle \\&lt;br /&gt;
                \left\vert{10}\right\rangle &amp;amp; \left\vert{11}\right\rangle \\&lt;br /&gt;
                \left\vert{11}\right\rangle &amp;amp; \left\vert{10}\right\rangle &lt;br /&gt;
\end{array}&amp;lt;/math&amp;gt;|2.32}}&lt;br /&gt;
&lt;br /&gt;
This transformation is called the CNOT, or controlled NOT, since the second bit is flipped&lt;br /&gt;
if the first is in the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt; and otherwise left alone. The circuit diagram for this transformation corresponds to the following representation of the gate. Let &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; be zero or one.&lt;br /&gt;
The CNOT is then given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{x}\right\rangle_{i}\left\vert{y}\right\rangle_{j} \overset{CNOT}{\rightarrow} \left\vert{x}\right\rangle_{i}\left\vert{x\oplus y}\right\rangle_{j}.&amp;lt;/math&amp;gt;|2.33}}&lt;br /&gt;
&lt;br /&gt;
In binary, of course &amp;lt;math&amp;gt;0\oplus 0 =0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;0\oplus 1 = 1 = 1\oplus 0&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;1\oplus 1 =0&amp;lt;/math&amp;gt;.  The circuit diagram is given in Fig. 2.3 below. &lt;br /&gt;
The first qubit at the top of the diagram, &amp;lt;math&amp;gt;\left\vert{x}\right\rangle&amp;lt;/math&amp;gt;, is called the&lt;br /&gt;
''control bit'' while the one below, &amp;lt;math&amp;gt;\left\vert{y}\right\rangle&amp;lt;/math&amp;gt;, is called the ''target bit''.&lt;br /&gt;
&lt;br /&gt;
[[File:CNOT.jpg|center|400px]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Figure 2.3: Circuit diagram for a CNOT gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
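The action of Eq. (2.33) can be confirmed against the matrix of Eq. (2.31) for all four basis states (a NumPy sketch):&lt;br /&gt;

```python
import numpy as np

# The CNOT gate of Eq. (2.31) acting on the two-qubit basis, Eqs. (2.32)-(2.33):
# |x>|y> maps to |x>|x XOR y>, with the first qubit as control.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]

for x in (0, 1):
    for y in (0, 1):
        out = CNOT @ np.kron(ket[x], ket[y])
        expected = np.kron(ket[x], ket[x ^ y])   # |x>|x XOR y>
        print(x, y, np.allclose(out, expected))  # True in every case
```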
&lt;br /&gt;
One can immediately generalize the operation of the CNOT to a controlled-U gate. This&lt;br /&gt;
is a gate, shown in Fig. 2.4, which implements a unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; on the second&lt;br /&gt;
qubit, if the state of the first is &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. The matrix transformation is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;CU_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{21} &amp;amp; u_{22} \end{array}\right),&amp;lt;/math&amp;gt;|2.34}}&lt;br /&gt;
&lt;br /&gt;
where the matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;U = \left(\begin{array}{cc}&lt;br /&gt;
          u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
          u_{21} &amp;amp; u_{22} \end{array}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, the controlled-phase gate is given in [[#Figure 2.5|Fig. 2.5]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CU.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.4: Circuit diagram for a CU gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Many-qubit Circuits====&lt;br /&gt;
&lt;br /&gt;
Many-qubit circuits are a straight-forward generalization of the single-qubit circuit diagrams.&lt;br /&gt;
For example, Fig. 2.6 shows the implementation of CNOT&amp;lt;math&amp;gt;_{14}&amp;lt;/math&amp;gt; and CNOT&amp;lt;math&amp;gt;_{23}&amp;lt;/math&amp;gt; in the&lt;br /&gt;
same diagram. The crossing of lines is not confusing since there is a target and control&lt;br /&gt;
which are clearly distinguished in each case.&lt;br /&gt;
&lt;br /&gt;
It is quite interesting, however, that as the diagrams become more complicated, the possibility&lt;br /&gt;
arises that one may change between equivalent forms of a circuit that, in the end,&lt;br /&gt;
&amp;lt;div id =&amp;quot;Figure 2.5&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CP.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.5: Circuit diagram for a Controlled-phase &amp;lt;math&amp;gt;C_{PHASE}\!&amp;lt;/math&amp;gt; gate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Multiqcs.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.6: Multiple CNOT gates on a set of qubits.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
implements the same multiple-qubit unitary. For example, noting that &amp;lt;math&amp;gt;H(C_{PHASE})H = C_{NOT}\,\!&amp;lt;/math&amp;gt;, the two&lt;br /&gt;
circuits in Fig. 2.7 implement the same two-qubit unitary transformation. This enables the&lt;br /&gt;
simplification of some quite complicated circuits.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Hzhequiv.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.7: Two circuits which are equivalent since they implement the same two-qubit&lt;br /&gt;
unitary transformation.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
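The equivalence underlying Fig. 2.7 can be verified numerically; the sketch below assumes the controlled-phase gate is the controlled-&amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt; gate &amp;lt;math&amp;gt;\mbox{diag}(1,1,1,-1)\,\!&amp;lt;/math&amp;gt; and that the Hadamards act on the target qubit:&lt;br /&gt;

```python
import numpy as np

# Checking the circuit identity behind Fig. 2.7: a controlled-phase (CZ) gate
# conjugated by Hadamards on the target qubit equals a CNOT.
H = (1 / np.sqrt(2)) * np.array([[1, 1], [1, -1]], dtype=complex)
I = np.eye(2, dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

IH = np.kron(I, H)          # Hadamard on the second (target) qubit
print(np.allclose(IH @ CZ @ IH, CNOT))   # True
```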
&lt;br /&gt;
===Measurement===&lt;br /&gt;
&lt;br /&gt;
Measurement in quantum mechanics is quite different from that of&lt;br /&gt;
classical mechanics.  In classical mechanics (and computing), one assumes that a measurement&lt;br /&gt;
can be made at will without disturbing or changing the state of the&lt;br /&gt;
physical system.  In quantum mechanics, this assumption cannot be&lt;br /&gt;
made.  This is important for a variety of reasons that will become&lt;br /&gt;
clear later.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Standard Prescription====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the introduction a simple example was provided to distinguish quantum states from classical states.  This example of &lt;br /&gt;
two wells with one particle can (with caution) be used here as well.  &lt;br /&gt;
&lt;br /&gt;
Consider the quantum state in a superposition of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert\psi\right\rangle = \alpha_0\left\vert 0\right\rangle +&lt;br /&gt;
    \alpha_1\left\vert 1\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.35}}&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  If the state is measured in&lt;br /&gt;
the computational basis, the result will be &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  As always, it is important to note that before the measurement the state is not in either of the computational basis states but in a superposition of the two.&lt;br /&gt;
&lt;br /&gt;
This can be easily shown, for the particular case &amp;lt;math&amp;gt;\alpha_0 = \alpha_1 = 1/\sqrt{2}\,\!&amp;lt;/math&amp;gt;, by acting on the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; with a Hadamard transformation,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert \psi\right\rangle = \left\vert 0\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.36}}&lt;br /&gt;
&lt;br /&gt;
This state, produced from a unitary transformation of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt;, has probability &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; and probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;.  If it were in one or the other, then acting on the state with a Hadamard transformation would give some probability of it being in &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and some probability of being in &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;. (This argument is so&lt;br /&gt;
simple and pointed that it was taken almost word-for-word from  [[Bibliography#Mermin:qcbook|Mermin's book]], page 27.)  &lt;br /&gt;
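This argument can also be checked numerically. Below is an illustrative sketch in Python with NumPy (added here, not part of the original text): the Hadamard matrix sends the equal superposition to &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with certainty, while it sends either basis state to a 50/50 superposition:&lt;br /&gt;

```python
import numpy as np

# Hadamard gate
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# The equal superposition, alpha_0 = alpha_1 = 1/sqrt(2)
psi = np.array([1, 1]) / np.sqrt(2)

# H maps this superposition to the basis state ket-0 with certainty:
# probability 1 for outcome 0 and probability 0 for outcome 1
out = H @ psi
prob_0 = abs(out[0]) ** 2
prob_1 = abs(out[1]) ** 2
assert np.isclose(prob_0, 1.0) and np.isclose(prob_1, 0.0)

# Had the qubit been in ket-0 (rather than a superposition),
# H would leave some probability for each outcome
out0 = H @ np.array([1, 0])
assert np.isclose(abs(out0[0]) ** 2, 0.5)
assert np.isclose(abs(out0[1]) ** 2, 0.5)
```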
&lt;br /&gt;
A measurement in the computational basis is said to project this state into either the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probabilities &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt; respectively.  To understand this as a projection, consider the following way in which the &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;-component of the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is found.  The state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is projected onto the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; mathematically by taking the [[Index#I|inner product]] (see [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|Section C.4]]) of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle 0\mid  \psi\right\rangle = \alpha_0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.37}}&lt;br /&gt;
&lt;br /&gt;
Notice that this is a complex number and that its complex conjugate&lt;br /&gt;
can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi \mid 0\right\rangle = \alpha_0^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.38}}&lt;br /&gt;
&lt;br /&gt;
Therefore the probability can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi\mid 0 \right\rangle \left\langle 0\mid\psi\right\rangle = \left\vert\left\langle &lt;br /&gt;
  0\mid \psi\right\rangle \right\vert^2.\,\!&amp;lt;/math&amp;gt;|2.39}}&lt;br /&gt;
&lt;br /&gt;
Now consider a multiple-qubit system with state &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert \Psi\right\rangle = \sum_i \alpha_i\left\vert i\right\rangle.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
The result of a measurement is a projection and the&lt;br /&gt;
state is projected onto the basis state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;; the same properties hold for this more general&lt;br /&gt;
system.  &lt;br /&gt;
&lt;br /&gt;
To summarize, if a measurement is made on the system &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; is obtained with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Assuming that &amp;lt;math&amp;gt;\left\vert i\right\rangle \,\!&amp;lt;/math&amp;gt; results from the measurement, the state of the&lt;br /&gt;
system has been projected into the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  Therefore, the&lt;br /&gt;
state of the system immediately after the measurement is &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
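The measurement prescription just summarized is easy to simulate. The following Python/NumPy sketch (an added illustration, not part of the original text) draws an outcome &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt; and replaces the state by &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic three-qubit state: 8 random complex amplitudes, normalized
amps = rng.normal(size=8) + 1j * rng.normal(size=8)
psi = amps / np.linalg.norm(amps)

# Born-rule probabilities for a computational-basis measurement
probs = np.abs(psi) ** 2
assert np.isclose(probs.sum(), 1.0)

# Simulate one measurement: draw outcome i with probability given by probs,
# then project the state onto the basis vector ket-i
i = rng.choice(8, p=probs)
post = np.zeros(8, dtype=complex)
post[i] = 1.0  # the state immediately after the measurement is ket-i
```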
&lt;br /&gt;
A circuit diagram with a measurement represented by a box with an&lt;br /&gt;
arrow is given in Figure 2.8.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementcd.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.8: The circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
An alternative is to put an &amp;lt;nowiki&amp;gt;&amp;quot;M&amp;quot;&amp;lt;/nowiki&amp;gt; inside the box.  This is shown in Fig. 2.9.  &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementM.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.9: An alternative circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an example, the measurement result can be used as input for another operation.  The unitary transformation&lt;br /&gt;
in Figure 2.10 is one that depends upon the outcome of the&lt;br /&gt;
measurement.  Notice that the input information, since it is&lt;br /&gt;
classical, is represented by a double line.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurement.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.10: A circuit which includes a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Projection Operators====&lt;br /&gt;
&lt;br /&gt;
Projection operators are used quite often and the description of&lt;br /&gt;
measurement in the previous section is a good example of how they are&lt;br /&gt;
used.  One may ask, what is a projector?  In ordinary&lt;br /&gt;
three-dimensional space, a vector is written as &lt;br /&gt;
&amp;lt;math&amp;gt;\vec v=v_x\hat{x}+v_y\hat{y}+v_z\hat{z}\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; part of the&lt;br /&gt;
vector can be obtained by &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x}\cdot\vec v) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
This is the part of the vector lying along the x axis.  Notice that if&lt;br /&gt;
the projection is performed again, the same result is obtained&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x} \cdot v_x\hat{x}) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.41}}&lt;br /&gt;
&lt;br /&gt;
This is the characteristic property of projection operations.  When one is&lt;br /&gt;
performed twice, the second result is the same as the first.  &lt;br /&gt;
&lt;br /&gt;
This can be extended to the complex vectors in quantum mechanics.  The&lt;br /&gt;
outer product &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\!\!\left\langle{x}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector.  For example,&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector and can be written in matrix form as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert = \left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.42}}&lt;br /&gt;
&lt;br /&gt;
Acting with this on &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
           \alpha_1 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.43}}&lt;br /&gt;
&lt;br /&gt;
Acting again produces&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
              0 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.44}}&lt;br /&gt;
&lt;br /&gt;
This is due to the fact that&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert)^2 = \left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.45}}&lt;br /&gt;
&lt;br /&gt;
In fact, this property essentially defines a projection.  A projection is&lt;br /&gt;
a linear transformation &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;P^2 = P\,\!&amp;lt;/math&amp;gt;. Much of our intuition about geometric projections in&lt;br /&gt;
three dimensions carries over to the more abstract cases.  One important&lt;br /&gt;
example is that the sum over all projectors onto a complete basis is the identity.  The&lt;br /&gt;
generalization to arbitrary dimensions, where &amp;lt;math&amp;gt;\left\vert{i}\right\rangle\,\!&amp;lt;/math&amp;gt; is any basis&lt;br /&gt;
vector in that space, is immediate.  In this case the identity,&lt;br /&gt;
expressed as a sum over all projectors, is &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sum_{i} \left\vert{i}\right\rangle\!\!\left\langle{i}\right\vert = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.46}}&lt;br /&gt;
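These defining properties of projectors are easy to verify numerically. The following Python/NumPy sketch (an added illustration, not part of the original text) checks idempotence, Eq. (2.45), and completeness, Eq. (2.46), for the qubit projectors:&lt;br /&gt;

```python
import numpy as np

# Projector onto ket-0: the outer product of ket-0 with bra-0, Eq. (2.42)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
P0 = np.outer(ket0, ket0)
P1 = np.outer(ket1, ket1)

# Idempotence, Eq. (2.45): applying the projector twice equals applying it once
assert np.allclose(P0 @ P0, P0)

# Acting on a generic state keeps only the alpha_0 component, Eq. (2.43)
psi = np.array([0.6, 0.8])
assert np.allclose(P0 @ psi, [0.6, 0.0])

# Completeness, Eq. (2.46): the projectors onto a full basis sum to the identity
assert np.allclose(P0 + P1, np.eye(2))
```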
&lt;br /&gt;
====Phase in/Phase out====&lt;br /&gt;
&lt;br /&gt;
The probability of finding the system in the state &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
where &amp;lt;math&amp;gt;x=0\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi}\right\rangle}(\left\vert{x}\right\rangle) &amp;amp;= \left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.47}}&lt;br /&gt;
Note that &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\langle{\psi}\right\vert\,\!&amp;lt;/math&amp;gt; both appear in this&lt;br /&gt;
expression. So if &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; were &lt;br /&gt;
substituted into the expression for &amp;lt;math&amp;gt;\mbox{Prob}(\left\vert{x}\right\rangle)\,\!&amp;lt;/math&amp;gt;, then the&lt;br /&gt;
expression is unchanged, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi^\prime}\right\rangle}(\left\vert{x}\right\rangle) &lt;br /&gt;
                     &amp;amp;= \left\langle{\psi^\prime}\mid{x}\right\rangle\left\langle{x}\mid{\psi^\prime}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= e^{-i\theta}\left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle e^{i\theta} \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.48}}&lt;br /&gt;
Therefore when &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; changes by a phase, there is no effect on&lt;br /&gt;
this probability.  This is why it is often said that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
         e^{i\theta} &amp;amp; 0 \\&lt;br /&gt;
               0  &amp;amp; e^{-i\theta}  \end{array}\right) &lt;br /&gt;
= e^{i\theta}\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.49}}&lt;br /&gt;
is equivalent to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.50}}&lt;br /&gt;
&lt;br /&gt;
However, there are times when a phase can make a difference. In&lt;br /&gt;
those cases it is really a ''relative'' phase between two states that makes the difference. This will become clear later on.&lt;br /&gt;
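The insensitivity of measurement probabilities to a global phase can be illustrated numerically. The sketch below (Python with NumPy, added here and not part of the original text) compares the outcome probability for &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import numpy as np

psi = np.array([0.6, 0.8j])            # a normalized qubit state (illustrative)
theta = 1.234
psi_prime = np.exp(-1j * theta) * psi  # the same state up to a global phase

ket0 = np.array([1.0, 0.0])

# Probability of outcome ket-0, Eq. (2.47): the modulus-squared inner
# product is unchanged by the phase factor, Eq. (2.48)
p = np.abs(np.vdot(ket0, psi)) ** 2
p_prime = np.abs(np.vdot(ket0, psi_prime)) ** 2
assert np.isclose(p, p_prime)
```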
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 3 - Physics of Quantum Information#Introduction|Continue to '''Chapter 3 - Physics of Quantum Information''']]&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1786</id>
		<title>Chapter 2 - Qubits and Collections of Qubits</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1786"/>
		<updated>2012-01-05T08:35:23Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Many-qubit Circuits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
There are several parts to any quantum information processing task. Some of these were&lt;br /&gt;
written down and discussed by David DiVincenzo in the early days of quantum computing&lt;br /&gt;
research and are therefore called DiVincenzo’s requirements for quantum computing. These&lt;br /&gt;
include, but are not limited to, the following, which will be discussed in this chapter. Other&lt;br /&gt;
requirements will be discussed later.&lt;br /&gt;
&lt;br /&gt;
Five requirements [[Bibliography#qcrequirements|DiVincenzo:2000]]:&lt;br /&gt;
#Be a scalable physical system with well-defined qubits&lt;br /&gt;
#Be initializable to a simple fiducial state such as &amp;lt;math&amp;gt;\left\vert{000...}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
#Have much longer decoherence times than gating times&lt;br /&gt;
#Have a universal set of quantum gates&lt;br /&gt;
#Permit qubit-specific measurements&lt;br /&gt;
&lt;br /&gt;
The first requirement is a set of two-state quantum systems which can serve as qubits. The&lt;br /&gt;
second is to be able to initialize the set of qubits to some reference state. In this chapter,&lt;br /&gt;
these will be taken for granted. The third concerns noise, which has become known by&lt;br /&gt;
the term decoherence. The term decoherence has had a more precise definition in the past,&lt;br /&gt;
but here it will usually be synonymous with noise. Noise and decoherence will be discussed in [[Chapter 6 - Noise in Quantum Systems|Chapter 6]].  This chapter is primarily concerned with the fifth of these criteria.  This will enable us to discuss many interesting aspects of quantum information problems while postponing some other technical details regarding the other criteria.&lt;br /&gt;
&lt;br /&gt;
===Qubit States===&lt;br /&gt;
&lt;br /&gt;
As mentioned in the introduction, a qubit, or quantum bit, is represented by a two-state&lt;br /&gt;
quantum system. It is referred to as a two-state quantum system, although there are many&lt;br /&gt;
physical examples of qubits which are represented by two different states of a quantum&lt;br /&gt;
system that has many available states. These two states are represented by the vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; and the qubit could be in the state &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;, the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, or a complex superposition of&lt;br /&gt;
these two. A qubit state which is an arbitrary superposition is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle,&amp;lt;/math&amp;gt; |2.1}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers. Our objective is to use these two states to store and&lt;br /&gt;
manipulate information. If the state of the system is confined to one state, the other, or a&lt;br /&gt;
superposition of the two, then&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1.\,\!&amp;lt;/math&amp;gt; |2.2}}&lt;br /&gt;
&lt;br /&gt;
This means that this vector is normalized, i.e. its magnitude (or length) is one. The set of all such&lt;br /&gt;
vectors forms a two-dimensional complex (so four-dimensional real) vector space.&amp;lt;ref name=&amp;quot;test&amp;quot;&amp;gt;[[Appendix B - Complex Numbers|Appendix B]] contains a basic introduction to complex numbers.&amp;lt;/ref&amp;gt; The basis vectors for such a space are the two vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which are called ''computational basis'' states. These two basis states are represented by&lt;br /&gt;
 &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{0}\right\rangle = \left(\begin{array}{c} 1 \\ 0\end{array}\right), \;\;\left\vert{1}\right\rangle = \left(\begin{array}{c} 0 \\ 1\end{array}\right).&amp;lt;/math&amp;gt; |2.3}}&lt;br /&gt;
&lt;br /&gt;
Thus, the qubit state can be rewritten as&lt;br /&gt;
&lt;br /&gt;
{{Equation |&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1\end{array}\right).&amp;lt;/math&amp;gt; |2.4}}&lt;br /&gt;
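The vector representation above translates directly into code. The following Python/NumPy sketch (an added illustration, not part of the original text) builds a qubit state from the computational basis states and checks its normalization; the particular amplitudes are chosen for illustration only:&lt;br /&gt;

```python
import numpy as np

# Computational basis states, Eq. (2.3)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# A generic qubit state, Eqs. (2.1) and (2.4)
alpha0, alpha1 = 0.6, 0.8j
psi = alpha0 * ket0 + alpha1 * ket1

# Normalization, Eq. (2.2): the vector has magnitude one
assert np.isclose(abs(alpha0) ** 2 + abs(alpha1) ** 2, 1.0)
assert np.isclose(np.linalg.norm(psi), 1.0)
```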
&lt;br /&gt;
===Qubit Gates===&lt;br /&gt;
&lt;br /&gt;
During a computation, one qubit state will need to be taken to a different one. In fact,&lt;br /&gt;
any valid state should be able to be operated upon to obtain any other state. Since this&lt;br /&gt;
is a complex vector with magnitude one, the matrix transformation required for closed system&lt;br /&gt;
evolution is unitary. (See [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Appendix C, Sec. C.3.8]].) These unitary matrices, or unitary&lt;br /&gt;
transformations, as well as their generalization to many qubits, transform one complex&lt;br /&gt;
vector into another and are also called ''quantum gates'', or gating operations. Mathematically,&lt;br /&gt;
we may think of them as rotations of the complex vector and in some cases (but not all)&lt;br /&gt;
correspond to actual rotations of the physical system.&lt;br /&gt;
&lt;br /&gt;
====Circuit Diagrams for Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
Unitary transformations are represented in a circuit diagram with a box around the unitary&lt;br /&gt;
transformation. Consider a unitary transformation &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on a single qubit state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;. If the&lt;br /&gt;
result of the transformation is &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, we can then write&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = V\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.5}}&lt;br /&gt;
&lt;br /&gt;
The corresponding circuit diagram is shown in Fig. 2.1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:Vbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.1: Circuit diagram for a one-qubit gate that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;. The input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the output, &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notice that the diagram is read from left to right. This means that if two consecutive&lt;br /&gt;
gates are implemented, say &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;, the equation reads:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle = UV\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.6}}&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The circuit diagram will have the boxes in the reverse order from the equation, i.e.&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on the left and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; on the right (refer to Fig. 2.2 below). While this is somewhat confusing, it is important to remember this convention; circuit diagrams will become increasingly important as the number of operations grows larger.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:UVbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.2: Circuit diagram for two one-qubit gates that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; followed by another unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt;. Like the single gate, the input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the new output, &amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
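The ordering convention can be made concrete with a short numerical example. In the sketch below (Python with NumPy, added here and not part of the original text), &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; are taken, purely for illustration, to be the bit-flip and Hadamard gates defined later in this chapter:&lt;br /&gt;

```python
import numpy as np

# Example one-qubit gates: take V to be the bit-flip X and U the Hadamard H
V = np.array([[0, 1], [1, 0]])
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

psi = np.array([1.0, 0.0])

# "V first, then U" reads right to left in Eq. (2.6): apply V, then U
psi_out = U @ (V @ psi)

# Multiplying in the opposite order generally gives a different state
assert not np.allclose(psi_out, V @ (U @ psi))
```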
&lt;br /&gt;
====Examples of Important Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
There are, of course, an infinite number of possible unitary transformations that we could&lt;br /&gt;
implement on a single qubit since the set of unitary transformations can be parameterized by&lt;br /&gt;
three parameters. However, a single gate will contain a single unitary transformation, which&lt;br /&gt;
means that all three parameters are fixed. There are several such transformations that are&lt;br /&gt;
used repeatedly. For this reason, they are listed here along with their actions on a generic&lt;br /&gt;
state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;. Note that one could also completely define the transformation by&lt;br /&gt;
its action on a complete set of basis states.&lt;br /&gt;
&lt;br /&gt;
The following is called an &amp;lt;nowiki&amp;gt;“x”&amp;lt;/nowiki&amp;gt; gate, or a bit-flip, &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X = \left(\begin{array}{cc} 0 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.7}}&lt;br /&gt;
&lt;br /&gt;
Its action on a state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is to exchange the basis states,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X\left\vert{\psi}\right\rangle = \alpha_0\left\vert{1}\right\rangle + \alpha_1\left\vert{0}\right\rangle,&amp;lt;/math&amp;gt;|2.8}}&lt;br /&gt;
&lt;br /&gt;
for this reason it is also sometimes called a NOT gate. However, this term will be avoided&lt;br /&gt;
because a general NOT gate does not exist for all quantum states. (It does work for all qubit&lt;br /&gt;
states, but this is a special case.)&lt;br /&gt;
&lt;br /&gt;
The next gate is called a ''phase gate'' or a “z” gate. It is also sometimes called a ''phase-flip'',&lt;br /&gt;
and is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z = \left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.9}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate is to introduce a sign change on the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which can be seen&lt;br /&gt;
through&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle - \alpha_1\left\vert{1}\right\rangle.&amp;lt;/math&amp;gt;|2.10}}&lt;br /&gt;
&lt;br /&gt;
The term phase gate is also used for the more general transformation&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;P = \left(\begin{array}{cc} e^{i\theta} &amp;amp; 0 \\ &lt;br /&gt;
                                0       &amp;amp; e^{-i\theta} \end{array}\right).&amp;lt;/math&amp;gt;|2.11}}&lt;br /&gt;
&lt;br /&gt;
For this reason, the z-gate will either be called a “z-gate” or a phase-flip gate.&lt;br /&gt;
&lt;br /&gt;
Another gate closely related to these, is the “y” gate. This gate is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y =  \left(\begin{array}{cc} 0 &amp;amp; -i \\ &lt;br /&gt;
                      i &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.12}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate on a state is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y\left\vert{\psi}\right\rangle = -i\alpha_1\left\vert{0}\right\rangle +i \alpha_0\left\vert{1}\right\rangle &lt;br /&gt;
            = -i(\alpha_1\left\vert{0}\right\rangle - \alpha_0\left\vert{1}\right\rangle)&amp;lt;/math&amp;gt;|2.13}}&lt;br /&gt;
&lt;br /&gt;
From this last expression, it is clear that, up to an overall factor of &amp;lt;math&amp;gt;-i\,\!&amp;lt;/math&amp;gt;, this gate is the same&lt;br /&gt;
as acting on a state with both &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt; gates. However, the order matters, and it&lt;br /&gt;
should be noted that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;XZ = -i Y,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
whereas&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;ZX = i Y.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that the order matters should not be a surprise to anyone since matrices in general&lt;br /&gt;
do not commute. However, such a condition arises so often in quantum mechanics that the&lt;br /&gt;
difference between these two is given an expression and a name. The difference between the two is called the ''commutator'' and is denoted with a &amp;lt;math&amp;gt;[\cdot,\cdot]&amp;lt;/math&amp;gt;. That is, for any two matrices, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;, the commutator is defined to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[A,B] = AB -BA.\,\!&amp;lt;/math&amp;gt;|2.14}}&lt;br /&gt;
For the two gates &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt;,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[X,Z] = -2iY.\,\!&amp;lt;/math&amp;gt;|2.15}}&lt;br /&gt;
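These relations are simple to verify numerically. The following Python/NumPy sketch (an added illustration, not part of the original text) checks &amp;lt;math&amp;gt;XZ = -iY\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;ZX = iY\,\!&amp;lt;/math&amp;gt;, and the commutator of Eq. (2.15):&lt;br /&gt;

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

# Order matters: XZ and ZX differ by a sign
assert np.allclose(X @ Z, -1j * Y)
assert np.allclose(Z @ X, 1j * Y)


def commutator(A, B):
    # Eq. (2.14): the commutator of two matrices
    return A @ B - B @ A


# Eq. (2.15): the commutator of X and Z
assert np.allclose(commutator(X, Z), -2j * Y)
```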
A very important gate which is used in many quantum information processing protocols,&lt;br /&gt;
including quantum algorithms, is called the Hadamard gate,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.16}}&lt;br /&gt;
In this case, it is helpful to look at what this gate does to the two basis states:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H \left\vert{0}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle + \left\vert{1}\right\rangle), &amp;lt;/math&amp;gt;&amp;lt;br /&amp;gt;&amp;lt;math&amp;gt;H \left\vert{1}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle - \left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.17}}&lt;br /&gt;
&lt;br /&gt;
So the Hadamard gate will take either one of the basis states and produce an equal superposition&lt;br /&gt;
of the two basis states; this is the reason it is so-often used in quantum information&lt;br /&gt;
processing tasks. On a generic state,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert{\psi}\right\rangle = \frac{1}{\sqrt{2}}[(\alpha_0+\alpha_1)\left\vert{0}\right\rangle + (\alpha_0-\alpha_1)\left\vert{1}\right\rangle].&amp;lt;/math&amp;gt;|2.18}}&lt;br /&gt;
&lt;br /&gt;
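The action of the Hadamard gate can likewise be verified numerically. The sketch below (Python with NumPy, added here and not part of the original text) checks its action on the basis states, Eq. (2.17), and on a generic real-amplitude state; note the overall factor of &amp;lt;math&amp;gt;1/\sqrt{2}\,\!&amp;lt;/math&amp;gt;:&lt;br /&gt;

```python
import numpy as np

# Hadamard gate, Eq. (2.16)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Basis-state actions, Eq. (2.17): equal superpositions of the basis states
assert np.allclose(H @ [1, 0], [1 / np.sqrt(2), 1 / np.sqrt(2)])
assert np.allclose(H @ [0, 1], [1 / np.sqrt(2), -1 / np.sqrt(2)])

# A generic (here real-amplitude) state, chosen for illustration
alpha0, alpha1 = 0.6, 0.8
psi = np.array([alpha0, alpha1])

# H gives (alpha0 + alpha1, alpha0 - alpha1) divided by sqrt(2)
expected = np.array([alpha0 + alpha1, alpha0 - alpha1]) / np.sqrt(2)
assert np.allclose(H @ psi, expected)
```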
===The Pauli Matrices===&lt;br /&gt;
The three matrices &amp;lt;math&amp;gt;X,\,\!&amp;lt;/math&amp;gt; [[#eq2.7|Eq.(2.7)]] &amp;lt;math&amp;gt;Y,\,\!&amp;lt;/math&amp;gt; [[#eq2.12|Eq.(2.12)]]  and &amp;lt;math&amp;gt; Z \,\!&amp;lt;/math&amp;gt; [[#eq2.9|Eq.(2.9)]] are called the Pauli matrices. They are also sometimes denoted &amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt; respectively. They are ubiquitous in quantum computing and quantum information processing. This is because they, along with the &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix, form a basis for the set of &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrices and can be used to&lt;br /&gt;
describe all &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; unitary transformations as well. We will return to the latter point in the next chapter.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table2.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 2.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot; |Table 2.1: ''The Pauli Matrices.  The table shows the Pauli matrices, three different, but common notations, and the action on a state.  The &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a binary digit, 0 or 1.''&lt;br /&gt;
|-&lt;br /&gt;
|Pauli Matrix&lt;br /&gt;
|Notation 1&lt;br /&gt;
|Notation 2&lt;br /&gt;
|Notation 3&lt;br /&gt;
|Action&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; 1 \\ 1 &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X|x\rangle = |x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; -i \\ i &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y =iXZ\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y|x\rangle = i(-1)^x|x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z|x\rangle = (-1)^x|x\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
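The matrices and actions in [[#Table2.1|Table 2.1]] can be checked directly; the following is a minimal numerical sketch in Python using numpy (added here purely as an illustration):&lt;br /&gt;

```python
import numpy as np

# The three Pauli matrices of Table 2.1.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Computational basis states as column vectors.
ket = {0: np.array([1, 0], dtype=complex),
       1: np.array([0, 1], dtype=complex)}

for x in (0, 1):
    # X flips the bit: it sends basis state x to x XOR 1.
    assert np.allclose(X @ ket[x], ket[x ^ 1])
    # Y flips the bit and attaches the factor i(-1)^x.
    assert np.allclose(Y @ ket[x], 1j * (-1) ** x * ket[x ^ 1])
    # Z leaves the bit alone and attaches the phase (-1)^x.
    assert np.allclose(Z @ ket[x], (-1) ** x * ket[x])

# The relation Y = iXZ from the Notation 1 column of the table.
assert np.allclose(Y, 1j * X @ Z)
```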
To show that they form a basis for &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; Hermitian matrices, note that any such matrix can be written in the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;A = \left(\begin{array}{cc} &lt;br /&gt;
                a_0+a_3  &amp;amp; a_1+ia_2 \\ &lt;br /&gt;
                a_1-ia_2 &amp;amp; a_0-a_3 \end{array}\right).&amp;lt;/math&amp;gt;|2.19}}&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;a_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_3\,\!&amp;lt;/math&amp;gt; are arbitrary, &amp;lt;math&amp;gt;a_0 + a_3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_0 - a_3\,\!&amp;lt;/math&amp;gt; are arbitrary too. This matrix can be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}A &amp;amp;= a_0 \mathbb{I} + a_1X + a_2Y + a_3 Z \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + a_1\sigma_1 + a_2\sigma_2 + a_3 \sigma_3 \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + \vec{a}\cdot\vec{\sigma}, \\&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|2.20}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{a}\cdot\vec{\sigma} = \sum_{i=1}^3a_i\sigma_i\,\!&amp;lt;/math&amp;gt; is the &amp;quot;dot&lt;br /&gt;
product&amp;quot; between &amp;lt;math&amp;gt;\vec{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{\sigma} = (\sigma_1,\sigma_2,\sigma_3)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
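As a numerical illustration of Eq. (2.20), the sketch below (Python with numpy) builds a random Hermitian matrix and recovers the coefficients &amp;lt;math&amp;gt;a_0,\dots,a_3\,\!&amp;lt;/math&amp;gt; from traces; the trace formula used here follows from the orthogonality relation derived below:&lt;br /&gt;

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma_3

# A random 2x2 Hermitian matrix.
rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = (M + M.conj().T) / 2

# Coefficients of the expansion A = a0 I + a.sigma; each follows from
# Tr(sigma_i sigma_j) = 2 delta_ij together with Tr(sigma_i) = 0.
a0 = np.trace(A).real / 2
a = [np.trace(s @ A).real / 2 for s in sigma]

A_rebuilt = a0 * I2 + sum(ai * s for ai, s in zip(a, sigma))
assert np.allclose(A, A_rebuilt)
```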
&lt;br /&gt;
An important and useful relationship between these is the following (which shows why&lt;br /&gt;
the latter notation above is so useful)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma_i\sigma_j = \mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k,&amp;lt;/math&amp;gt;|2.21}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;i, j, k\,\!&amp;lt;/math&amp;gt; are numbers from the set &amp;lt;math&amp;gt;\{1, 2, 3\}\,\!&amp;lt;/math&amp;gt;, a sum over the repeated index &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; is implied, and the definitions for &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; are given&lt;br /&gt;
in Eqs. [[Appendix C - Vectors and Linear Algebra#eqC.17|(C.17)]] and [[Appendix C - Vectors and Linear Algebra#eqC.8|(C.8)]] respectively. The three matrices &amp;lt;math&amp;gt;\sigma_1, \sigma_2, \sigma_3\,\!&amp;lt;/math&amp;gt; are traceless Hermitian&lt;br /&gt;
matrices and they can be seen to be orthogonal using the so-called ''Hilbert-Schmidt inner product'', which is defined, for matrices &amp;lt;math&amp;gt; A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(A,B) = \mbox{Tr}(A^\dagger B).&amp;lt;/math&amp;gt;|2.22}}&lt;br /&gt;
&lt;br /&gt;
The orthogonality for the set is then summarized as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\sigma_i,\sigma_j) = \mbox{Tr}(\sigma_i\sigma_j) = 2\delta_{ij}.\,\!&amp;lt;/math&amp;gt;|2.23}}&lt;br /&gt;
&lt;br /&gt;
This property is contained in Eq. [[#eq2.21|(2.21)]]. This one equation also contains all of the commutators.&lt;br /&gt;
Subtracting the equation with the product reversed,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = (\mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k) &lt;br /&gt;
                      -(\mathbb{I}\delta_{ji} +i \epsilon_{jik}\sigma_k),&amp;lt;/math&amp;gt;|2.24}}&lt;br /&gt;
&lt;br /&gt;
but &amp;lt;math&amp;gt;\delta_{ij}=\delta_{ji}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk} = -\epsilon_{jik}\,\!&amp;lt;/math&amp;gt;.  This can now be simplified,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = 2i \epsilon_{ijk}\sigma_k.\,\!&amp;lt;/math&amp;gt;|2.25}}&lt;br /&gt;
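Both the product rule of Eq. (2.21) and the commutator of Eq. (2.25) can be verified by brute force over all index pairs; a minimal Python/numpy sketch:&lt;br /&gt;

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def eps(i, j, k):
    # Levi-Civita symbol for indices 0, 1, 2 (standing in for 1, 2, 3).
    return (i - j) * (j - k) * (k - i) / 2

for i in range(3):
    for j in range(3):
        # Product rule, Eq. (2.21), with the sum over k written explicitly.
        prod = np.eye(2) * (i == j) + sum(1j * eps(i, j, k) * sigma[k]
                                          for k in range(3))
        assert np.allclose(sigma[i] @ sigma[j], prod)
        # Commutator, Eq. (2.25).
        comm = sigma[i] @ sigma[j] - sigma[j] @ sigma[i]
        assert np.allclose(comm, sum(2j * eps(i, j, k) * sigma[k]
                                     for k in range(3)))
```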
&lt;br /&gt;
===States of Many Qubits===&lt;br /&gt;
Let us now consider the states of several (or many) qubits. For one qubit, there are two&lt;br /&gt;
possible basis states, say &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. If there are two qubits, each with these basis states,&lt;br /&gt;
a basis for the two together is found by using the tensor product. (See Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]].)&lt;br /&gt;
The set of basis states obtained in this way is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{0}\right\rangle\otimes\left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{1}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle\otimes\left\vert{1}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This set is more often written in short-hand notation as (again see Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]] for details and examples)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{00}\right\rangle, \; \left\vert{01}\right\rangle, \;&lt;br /&gt;
  \left\vert{10}\right\rangle, \; \left\vert{11}\right\rangle \right\},\,\!&amp;lt;/math&amp;gt;|2.26}}&lt;br /&gt;
&lt;br /&gt;
which can also be expressed as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left(\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right)&lt;br /&gt;
\right\}.\,\!&amp;lt;/math&amp;gt;|2.27}}&lt;br /&gt;
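In code, the tensor product of Eq. (2.26)/(2.27) is the Kronecker product; a short numpy sketch confirming that the ordered two-qubit basis states are the four standard unit column vectors:&lt;br /&gt;

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# The ordered two-qubit basis 00, 01, 10, 11 from Eq. (2.26).
basis2 = [np.kron(a, b) for a in (ket0, ket1) for b in (ket0, ket1)]
for n, v in enumerate(basis2):
    unit = np.zeros(4, dtype=complex)
    unit[n] = 1                      # n-th column vector of Eq. (2.27)
    assert np.allclose(v, unit)

# The binary ordering extends to more qubits: the three-qubit state built
# from the bits 1, 0, 1 sits at index 5, matching Eq. (2.29).
v101 = np.kron(np.kron(ket1, ket0), ket1)
assert v101[5] == 1 and np.count_nonzero(v101) == 1
```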
&lt;br /&gt;
The extension to three qubits is straightforward,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{000}\right\rangle, \; \left\vert{001}\right\rangle, \;&lt;br /&gt;
  \left\vert{010}\right\rangle, \; \left\vert{011}\right\rangle, \; \left\vert{100}\right\rangle, \; \left\vert{101}\right\rangle, \;&lt;br /&gt;
  \left\vert{110}\right\rangle, \; \left\vert{111}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.28}}&lt;br /&gt;
&lt;br /&gt;
Those familiar with binary will recognize these as the numbers zero through seven. Thus we&lt;br /&gt;
consider this an ''ordered basis'', and the states can also be acceptably presented as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{2}\right\rangle, \; \left\vert{3}\right\rangle, \; \left\vert{4}\right\rangle, \; \left\vert{5}\right\rangle, \;&lt;br /&gt;
  \left\vert{6}\right\rangle, \; \left\vert{7}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.29}}&lt;br /&gt;
&lt;br /&gt;
The ordering of the products is important because each spot&lt;br /&gt;
corresponds to a physical particle or physical system.  When some&lt;br /&gt;
confusion may arise, we may also label the ket with a subscript to&lt;br /&gt;
denote the particle or position.  For example, two different people,&lt;br /&gt;
Alice and Bob, can be used to represent distant parties that may&lt;br /&gt;
share some information or wish to communicate.  In this case, the&lt;br /&gt;
state belonging to Alice can be denoted &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_A\,\!&amp;lt;/math&amp;gt;.  Or if she is&lt;br /&gt;
referred to as party 1 or particle 1, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The most general 2-qubit state is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_{00}\left\vert{00}\right\rangle + \alpha_{01}\left\vert{01}\right\rangle &lt;br /&gt;
             + \alpha_{10}\left\vert{10}\right\rangle + \alpha_{11}\left\vert{11}\right\rangle &lt;br /&gt;
           =\left(\begin{array}{c} \alpha_{00} \\ \alpha_{01} \\ &lt;br /&gt;
                                   \alpha_{10} \\ \alpha_{11} \end{array}\right).&amp;lt;/math&amp;gt;|2.30}}&lt;br /&gt;
&lt;br /&gt;
The normalization condition is &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_{00}|^2  + |\alpha_{01}|^2&lt;br /&gt;
             + |\alpha_{10}|^2 + |\alpha_{11}|^2=1.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
The generalization to an arbitrary number of qubits, say &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, is also&lt;br /&gt;
rather straightforward and can be written as &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \sum_{i=0}^{2^n-1} \alpha_i\left\vert{i}\right\rangle.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Quantum Gates for Many Qubits===&lt;br /&gt;
&lt;br /&gt;
Just as in the case of a single qubit, the most general closed-system transformation of a&lt;br /&gt;
state of many qubits is a unitary transformation. Being able to make an arbitrary unitary&lt;br /&gt;
transformation on many qubits is an important task. If an arbitrary unitary transformation&lt;br /&gt;
on a set of qubits can be made, then any quantum gate can be implemented. If this ability to&lt;br /&gt;
implement any arbitrary quantum gate can be accomplished using a particular set of quantum&lt;br /&gt;
gates, that set is said to be a ''universal set of gates'' or that the condition of ''universality'' has&lt;br /&gt;
been met by this set. It turns out that there is a theorem which provides one way for&lt;br /&gt;
identifying a universal set of gates.&lt;br /&gt;
&lt;br /&gt;
'''Theorem:'''&lt;br /&gt;
&lt;br /&gt;
''The ability to implement an entangling gate between any two qubits, plus the ability to implement all single-qubit unitary transformations, will enable universal quantum computing.''&lt;br /&gt;
&lt;br /&gt;
It turns out that one doesn’t need to be able to perform an entangling gate between&lt;br /&gt;
distant qubits; nearest-neighbor interactions are sufficient. We can transfer the state of a&lt;br /&gt;
qubit to a qubit that is next to the one we would like it to interact with, then perform&lt;br /&gt;
the entangling gate between the two and then transfer back.&lt;br /&gt;
&lt;br /&gt;
This is an important and often used theorem which will be the main focus of the next&lt;br /&gt;
few sections. A particular class of two-qubit gates which can be used to entangle qubits will&lt;br /&gt;
be discussed along with circuit diagrams for many qubits.&lt;br /&gt;
&lt;br /&gt;
====Controlled Operations====&lt;br /&gt;
&lt;br /&gt;
A controlled operation is one that is conditioned on the state of another part of the system, usually a qubit. The most cited example is the CNOT (controlled NOT) gate, which flips one (target) qubit if another (control) qubit is in the state &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;; thus it is a controlled NOT operation for qubits. This gate is used often enough to warrant detailed discussion here.&lt;br /&gt;
&lt;br /&gt;
Consider the following matrix operation on two qubits:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;C_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.31}}&lt;br /&gt;
&lt;br /&gt;
Under this transformation, the following changes occur:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{array}{c|c}&lt;br /&gt;
         \; \left\vert{\psi}\right\rangle\; &amp;amp; C_{12}\left\vert{\psi}\right\rangle \\ \hline&lt;br /&gt;
                \left\vert{00}\right\rangle &amp;amp; \left\vert{00}\right\rangle \\&lt;br /&gt;
                \left\vert{01}\right\rangle &amp;amp; \left\vert{01}\right\rangle \\&lt;br /&gt;
                \left\vert{10}\right\rangle &amp;amp; \left\vert{11}\right\rangle \\&lt;br /&gt;
                \left\vert{11}\right\rangle &amp;amp; \left\vert{10}\right\rangle &lt;br /&gt;
\end{array}&amp;lt;/math&amp;gt;|2.32}}&lt;br /&gt;
&lt;br /&gt;
This transformation is called the CNOT, or controlled NOT, since the second qubit is flipped&lt;br /&gt;
if the first is in the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt; and is otherwise left alone. The circuit diagram for this transformation corresponds to the following representation of the gate. Let &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; be zero or one.&lt;br /&gt;
The CNOT is then given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{x}\right\rangle_{i}\left\vert{y}\right\rangle_{j} \overset{CNOT}{\rightarrow} \left\vert{x}\right\rangle_{i}\left\vert{x\oplus y}\right\rangle_{j}.&amp;lt;/math&amp;gt;|2.33}}&lt;br /&gt;
&lt;br /&gt;
In binary, of course &amp;lt;math&amp;gt;0\oplus 0 =0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;0\oplus 1 = 1 = 1\oplus 0&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;1\oplus 1 =0&amp;lt;/math&amp;gt;.  The circuit diagram is given in Fig. 2.3 below. &lt;br /&gt;
The first qubit at the top of the diagram, &amp;lt;math&amp;gt;\left\vert{x}\right\rangle&amp;lt;/math&amp;gt;, is called the&lt;br /&gt;
''control bit'' while the one below, &amp;lt;math&amp;gt;\left\vert{y}\right\rangle&amp;lt;/math&amp;gt;, is called the ''target bit''.&lt;br /&gt;
&lt;br /&gt;
[[File:CNOT.jpg|center|400px]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Figure 2.3: Circuit diagram for a CNOT gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
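The truth table of Eq. (2.32) and the rule of Eq. (2.33) can be confirmed by applying the matrix of Eq. (2.31) to tensor-product basis states; a minimal numpy sketch:&lt;br /&gt;

```python
import numpy as np

ket = {0: np.array([1, 0], dtype=complex),
       1: np.array([0, 1], dtype=complex)}

# The CNOT matrix of Eq. (2.31): control qubit first, target qubit second.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Eq. (2.33): the pair (x, y) is mapped to (x, x XOR y).
for x in (0, 1):
    for y in (0, 1):
        state_in = np.kron(ket[x], ket[y])
        state_out = np.kron(ket[x], ket[x ^ y])
        assert np.allclose(CNOT @ state_in, state_out)
```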
&lt;br /&gt;
One can immediately generalize the operation of the CNOT to a controlled-U gate. This&lt;br /&gt;
is a gate, shown in Fig. 2.4, which implements a unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; on the second&lt;br /&gt;
qubit, if the state of the first is &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. The matrix transformation is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;CU_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{21} &amp;amp; u_{22} \end{array}\right),&amp;lt;/math&amp;gt;|2.34}}&lt;br /&gt;
&lt;br /&gt;
where the matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;U = \left(\begin{array}{cc}&lt;br /&gt;
          u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
          u_{21} &amp;amp; u_{22} \end{array}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, the controlled-phase gate is given in [[#Figure 2.5|Fig. 2.5]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CU.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.4: Circuit diagram for a CU gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
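A controlled-U can be assembled for any single-qubit &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; by placing the identity in the control-0 block and &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; in the control-1 block, exactly as in Eq. (2.34); a short numpy sketch:&lt;br /&gt;

```python
import numpy as np

def controlled(U):
    # Apply U to the target only when the control qubit is in state 1:
    # block form P0 tensor I + P1 tensor U, matching Eq. (2.34).
    P0 = np.diag([1, 0]).astype(complex)
    P1 = np.diag([0, 1]).astype(complex)
    return np.kron(P0, np.eye(2, dtype=complex)) + np.kron(P1, U)

# Controlled-X reproduces the CNOT of Eq. (2.31).
X = np.array([[0, 1], [1, 0]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
assert np.allclose(controlled(X), CNOT)

# The controlled-phase gate of Fig. 2.5 is controlled-Z.
Z = np.diag([1, -1]).astype(complex)
assert np.allclose(controlled(Z), np.diag([1, 1, 1, -1]))
```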
&lt;br /&gt;
====Many-qubit Circuits====&lt;br /&gt;
&lt;br /&gt;
Many-qubit circuits are a straightforward generalization of the single-qubit circuit diagrams.&lt;br /&gt;
For example, Fig. 2.6 shows the implementation of CNOT&amp;lt;math&amp;gt;_{14}&amp;lt;/math&amp;gt; and CNOT&amp;lt;math&amp;gt;_{23}&amp;lt;/math&amp;gt; in the&lt;br /&gt;
same diagram. The crossing of lines is not confusing since there is a target and control&lt;br /&gt;
which are clearly distinguished in each case.&lt;br /&gt;
&lt;br /&gt;
It is quite interesting however, that as the diagrams become more complicated, the possibility&lt;br /&gt;
arises that one may change between equivalent forms of a circuit that, in the end,&lt;br /&gt;
&amp;lt;div id =&amp;quot;Figure 2.5&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CP.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.5: Circuit diagram for a Controlled-phase &amp;lt;math&amp;gt;C_{PHASE}\!&amp;lt;/math&amp;gt; gate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Multiqcs.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.6: Multiple CNOT gates on a set of qubits.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
implements the same multiple-qubit unitary. For example, noting that &amp;lt;math&amp;gt;H(C_{PHASE})H = CNOT\,\!&amp;lt;/math&amp;gt;, the two&lt;br /&gt;
circuits in Fig. 2.7 implement the same two-qubit unitary transformation. This enables the&lt;br /&gt;
simplification of some quite complicated circuits.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Hzhequiv.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.7: Two circuits which are equivalent since they implement the same two-qubit&lt;br /&gt;
unitary transformation.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
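The circuit identity of Fig. 2.7 can be checked in matrix form: conjugating the controlled-phase gate by Hadamards on the target qubit yields the CNOT (a minimal numpy sketch):&lt;br /&gt;

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CZ = np.diag([1, 1, 1, -1]).astype(complex)     # controlled-phase gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Hadamard on the target (second) qubit before and after the CZ.
IH = np.kron(np.eye(2, dtype=complex), H)
assert np.allclose(IH @ CZ @ IH, CNOT)
```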
&lt;br /&gt;
===Measurement===&lt;br /&gt;
&lt;br /&gt;
Measurement in quantum mechanics is quite different from that of&lt;br /&gt;
classical mechanics.  In classical mechanics (and computing), one assumes that a measurement&lt;br /&gt;
can be made at will without disturbing or changing the state of the&lt;br /&gt;
physical system.  In quantum mechanics, this assumption cannot be&lt;br /&gt;
made.  This is important for a variety of reasons that will become&lt;br /&gt;
clear later.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Standard Prescription====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the introduction a simple example was provided to distinguish quantum states from classical states.  This example of &lt;br /&gt;
two wells with one particle can (with caution) be used here as well.  &lt;br /&gt;
&lt;br /&gt;
Consider the quantum state in a superposition of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert\psi\right\rangle = \alpha_0\left\vert 0\right\rangle +&lt;br /&gt;
    \alpha_1\left\vert 1\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.35}}&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  If the state is measured in&lt;br /&gt;
the computational basis, the result will be &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  As always, it is important to note that it is not in either of the computational basis states but in a superposition of the two.&lt;br /&gt;
&lt;br /&gt;
This can be easily shown for the equal superposition, &amp;lt;math&amp;gt;\alpha_0 = \alpha_1 = 1/\sqrt{2}\,\!&amp;lt;/math&amp;gt;, by acting on the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; with a Hadamard transformation,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert \psi\right\rangle = \left\vert 0\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.36}}&lt;br /&gt;
&lt;br /&gt;
This state, produced from a unitary transformation of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt;, has probability &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; and probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;.  If it were in one or the other, then acting on the state with a Hadamard transformation would give some probability of it being in &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and some probability of being in &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;. (This argument is so&lt;br /&gt;
simple and pointed that it was taken almost word-for-word from  [[Bibliography#Mermin:qcbook|Mermin's book]], page 27.)  &lt;br /&gt;
&lt;br /&gt;
A measurement in the computational basis is said to project this state into either the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probabilities &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt; respectively.  To understand this as a projection, consider the following way in which the &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;-component of the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is found.  The state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is projected onto the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; mathematically by taking the [[Index#I|inner product]] (see [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|Section C.4]]) of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle 0\mid  \psi\right\rangle = \alpha_0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.37}}&lt;br /&gt;
&lt;br /&gt;
Notice that this is a complex number and that its complex conjugate&lt;br /&gt;
can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi \mid 0\right\rangle = \alpha_0^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.38}}&lt;br /&gt;
&lt;br /&gt;
Therefore the probability can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi\mid 0 \right\rangle \left\langle 0\mid\psi\right\rangle = \left\vert\left\langle &lt;br /&gt;
  0\mid \psi\right\rangle \right\vert^2.\,\!&amp;lt;/math&amp;gt;|2.39}}&lt;br /&gt;
&lt;br /&gt;
Now consider a multiple-qubit system with state &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert \Psi\right\rangle = \sum_i \alpha_i\left\vert i\right\rangle.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
The result of a measurement is a projection and the&lt;br /&gt;
state is projected onto the basis state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt; ---the same properties are true of this more general&lt;br /&gt;
system.  &lt;br /&gt;
&lt;br /&gt;
To summarize, if a measurement is made on the system &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; is obtained with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Assuming that &amp;lt;math&amp;gt;\left\vert i\right\rangle \,\!&amp;lt;/math&amp;gt; results from the measurement, the state of the&lt;br /&gt;
system has been projected into the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  Therefore, the&lt;br /&gt;
state of the system immediately after the measurement is &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
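This prescription translates directly into a simulation: sample the outcome &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt; and replace the state by &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  A minimal numpy sketch:&lt;br /&gt;

```python
import numpy as np

def measure(psi, rng):
    # Projective measurement in the computational basis: outcome i occurs
    # with probability abs(alpha_i)**2 and the state collapses to basis i.
    probs = np.abs(psi) ** 2
    i = rng.choice(len(psi), p=probs)
    post = np.zeros_like(psi)
    post[i] = 1.0
    return i, post

rng = np.random.default_rng(1)
psi = np.array([np.sqrt(0.8), np.sqrt(0.2)], dtype=complex)

counts = np.zeros(2)
trials = 10000
for _ in range(trials):
    i, post = measure(psi, rng)
    counts[i] += 1
    assert post[i] == 1.0            # state immediately after measurement

# Empirical frequencies approach the Born probabilities 0.8 and 0.2.
assert np.isclose(counts[0] / trials, 0.8, atol=0.03)
```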
&lt;br /&gt;
A circuit diagram with a measurement represented by a box with an&lt;br /&gt;
arrow is given in Figure 2.8.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementcd.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.8: The circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
An alternative is to put an &amp;lt;nowiki&amp;gt;&amp;quot;M&amp;quot;&amp;lt;/nowiki&amp;gt; inside the box.  This is shown in Fig. 2.9.  &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementM.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.9: An alternative circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an example, the measurement result can be used as input for another operation.  The unitary transform&lt;br /&gt;
in Figure 2.10 is one that depends upon the outcome of the&lt;br /&gt;
measurement.  Notice that the information input, since it is&lt;br /&gt;
classical, is represented by a double line.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurement.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.10: A circuit which includes a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Projection Operators====&lt;br /&gt;
&lt;br /&gt;
Projection operators are used quite often and the description of&lt;br /&gt;
measurement in the previous section is a good example of how they are&lt;br /&gt;
used.  One may ask, what is a projector?  In ordinary&lt;br /&gt;
three-dimensional space, a vector is written as &lt;br /&gt;
&amp;lt;math&amp;gt;\vec v=v_x\hat{x}+v_y\hat{y}+v_z\hat{z}\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; part of the&lt;br /&gt;
vector can be obtained by &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x}\cdot\vec v) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
This is the part of the vector lying along the x axis.  Notice that if&lt;br /&gt;
the projection is performed again, the same result is obtained&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x} \cdot v_x\hat{x}) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.41}}&lt;br /&gt;
&lt;br /&gt;
This is the defining characteristic of projection operations.  When one is&lt;br /&gt;
performed twice, the second result is the same as the first.  &lt;br /&gt;
&lt;br /&gt;
This can be extended to the complex vectors in quantum mechanics.  The&lt;br /&gt;
outer product &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\!\!\left\langle{x}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector.  For example,&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector and can be written in matrix form as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert = \left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.42}}&lt;br /&gt;
&lt;br /&gt;
Acting with this on &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
           \alpha_1 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.43}}&lt;br /&gt;
&lt;br /&gt;
Acting again produces&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
              0 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.44}}&lt;br /&gt;
&lt;br /&gt;
This is due to the fact that&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert)^2 = \left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.45}}&lt;br /&gt;
&lt;br /&gt;
In fact, this property essentially defines a projection.  A projection is&lt;br /&gt;
a linear transformation &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;P^2 = P\,\!&amp;lt;/math&amp;gt;. Much of our intuition about geometric projections in&lt;br /&gt;
three dimensions carries over to the more abstract cases.  One important&lt;br /&gt;
example is that the sum over all projectors is the identity. The&lt;br /&gt;
generalization to arbitrary dimensions, where &amp;lt;math&amp;gt;\left\vert{i}\right\rangle\,\!&amp;lt;/math&amp;gt; is any basis&lt;br /&gt;
vector in that space, is immediate.  In this case the identity,&lt;br /&gt;
expressed as a sum over all projectors, is &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sum_{i} \left\vert{i}\right\rangle\!\!\left\langle{i}\right\vert = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.46}}&lt;br /&gt;
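These statements are easy to verify in matrix form; the sketch below (numpy) checks idempotence, Eq. (2.45), the projection of a superposition, Eq. (2.43), and completeness, Eq. (2.46):&lt;br /&gt;

```python
import numpy as np

# Projectors onto the basis states, built as outer products.
ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)
P0 = ket0 @ ket0.conj().T            # the matrix of Eq. (2.42)
P1 = ket1 @ ket1.conj().T

# Idempotence: applying a projector twice changes nothing, Eq. (2.45).
assert np.allclose(P0 @ P0, P0)
assert np.allclose(P1 @ P1, P1)

# Acting on a superposition keeps only the 0-component, Eq. (2.43).
psi = np.array([[0.6], [0.8j]])
assert np.allclose(P0 @ psi, np.array([[0.6], [0.0]]))

# The projectors sum to the identity, Eq. (2.46).
assert np.allclose(P0 + P1, np.eye(2))
```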
&lt;br /&gt;
====Phase in/Phase out====&lt;br /&gt;
&lt;br /&gt;
The probability of finding the system in the state &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
where &amp;lt;math&amp;gt;x=0\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi}\right\rangle}(\left\vert{x}\right\rangle) &amp;amp;= \left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.47}}&lt;br /&gt;
Note that &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\langle{\psi}\right\vert\,\!&amp;lt;/math&amp;gt; both appear in this&lt;br /&gt;
expression. So if &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
substituted into the expression for &amp;lt;math&amp;gt;\mbox{Prob}(\left\vert{x}\right\rangle)\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
expression is unchanged, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi^\prime}\right\rangle}(\left\vert{x}\right\rangle) &lt;br /&gt;
                     &amp;amp;= \left\langle{\psi^\prime}\mid{x}\right\rangle\left\langle{x}\mid{\psi^\prime}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= e^{i\theta}\left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle e^{-i\theta} \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.48}}&lt;br /&gt;
Therefore when &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; changes by a phase, there is no effect on&lt;br /&gt;
this probability.  This is why it is often said that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
         e^{i\theta} &amp;amp; 0 \\&lt;br /&gt;
               0  &amp;amp; e^{-i\theta}  \end{array}\right) &lt;br /&gt;
= e^{i\theta}\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.49}}&lt;br /&gt;
is equivalent to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.50}}&lt;br /&gt;
&lt;br /&gt;
However, there are times when a phase can make a difference. In&lt;br /&gt;
those cases it is really a ''relative'' phase between two states that makes the difference. This will become clear later on.&lt;br /&gt;
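The invariance of Eq. (2.48) under a global phase is easy to check numerically. The following is a minimal sketch (an illustrative aside, not part of the original text), assuming Python with NumPy and an arbitrary example state:

```python
import numpy as np

# A global phase e^{-i*theta} leaves every measurement probability
# |<x|psi>|^2 unchanged, as in Eq. (2.48). The state below is an
# arbitrary example, not one from the text.
theta = 0.7
psi = np.array([0.6, 0.8j])            # a normalized qubit state
psi_prime = np.exp(-1j * theta) * psi  # same state up to a global phase

probs = np.abs(psi) ** 2               # probabilities of |0> and |1>
probs_prime = np.abs(psi_prime) ** 2

assert np.allclose(probs, probs_prime)
print(probs)  # -> [0.36 0.64]
```

A ''relative'' phase, by contrast, changes the amplitudes themselves and so can change the outcome of later interference.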
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 3 - Physics of Quantum Information#Introduction|Continue to '''Chapter 3 - Physics of Quantum Information''']]&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1785</id>
		<title>Chapter 2 - Qubits and Collections of Qubits</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1785"/>
		<updated>2012-01-05T08:34:54Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Many-qubit Circuits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
There are several parts to any quantum information processing task. Some of these were&lt;br /&gt;
written down and discussed by David DiVincenzo in the early days of quantum computing&lt;br /&gt;
research and are therefore called DiVincenzo’s requirements for quantum computing. These&lt;br /&gt;
include, but are not limited to, the following, which will be discussed in this chapter. Other&lt;br /&gt;
requirements will be discussed later.&lt;br /&gt;
&lt;br /&gt;
Five requirements [[Bibliography#qcrequirements|DiVincenzo:2000]]:&lt;br /&gt;
#Be a scalable physical system with well-defined qubits&lt;br /&gt;
#Be initializable to a simple fiducial state such as &amp;lt;math&amp;gt;\left\vert{000...}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
#Have much longer decoherence times than gating times&lt;br /&gt;
#Have a universal set of quantum gates&lt;br /&gt;
#Permit qubit-specific measurements&lt;br /&gt;
&lt;br /&gt;
The first requirement is a set of two-state quantum systems which can serve as qubits. The&lt;br /&gt;
second is to be able to initialize the set of qubits to some reference state. In this chapter,&lt;br /&gt;
these will be taken for granted. The third concerns noise, which has become known by &lt;br /&gt;
the term decoherence. The term decoherence has had a more precise definition in the past,&lt;br /&gt;
but here it will usually be synonymous with noise. Noise and decoherence will be discussed in [[Chapter 6 - Noise in Quantum Systems|Chapter 6]].  This chapter is primarily concerned with the fourth of these criteria.  This will enable us to discuss many interesting aspects of quantum information problems while postponing some other technical details regarding the other criteria.&lt;br /&gt;
&lt;br /&gt;
===Qubit States===&lt;br /&gt;
&lt;br /&gt;
As mentioned in the introduction, a qubit, or quantum bit, is represented by a two-state&lt;br /&gt;
quantum system. It is referred to as a two-state quantum system, although there are many&lt;br /&gt;
physical examples of qubits which are represented by two different states of a quantum&lt;br /&gt;
system that has many available states. These two states are represented by the vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; and the qubit could be in the state &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;, the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, or a complex superposition of&lt;br /&gt;
these two. A qubit state which is an arbitrary superposition is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle,&amp;lt;/math&amp;gt; |2.1}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers. Our objective is to use these two states to store and&lt;br /&gt;
manipulate information. If the state of the system is confined to one state, the other, or a&lt;br /&gt;
superposition of the two, then&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1.\,\!&amp;lt;/math&amp;gt; |2.2}}&lt;br /&gt;
&lt;br /&gt;
This means that this vector is normalized, i.e. its magnitude (or length) is one. The set of all such&lt;br /&gt;
vectors forms a two-dimensional complex (so four-dimensional real) vector space.&amp;lt;ref name=&amp;quot;test&amp;quot;&amp;gt;[[Appendix B - Complex Numbers|Appendix B]] contains a basic introduction to complex numbers.&amp;lt;/ref&amp;gt; The basis vectors for such a space are the two vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which are called ''computational basis'' states. These two basis states are represented by&lt;br /&gt;
 &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{0}\right\rangle = \left(\begin{array}{c} 1 \\ 0\end{array}\right), \;\;\left\vert{1}\right\rangle = \left(\begin{array}{c} 0 \\ 1\end{array}\right).&amp;lt;/math&amp;gt; |2.3}}&lt;br /&gt;
&lt;br /&gt;
Thus, the qubit state can be rewritten as&lt;br /&gt;
&lt;br /&gt;
{{Equation |&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1\end{array}\right).&amp;lt;/math&amp;gt; |2.4}}&lt;br /&gt;
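As an illustrative aside (assuming Python with NumPy; the amplitudes chosen are arbitrary examples, not from the text), Eqs. (2.1)–(2.4) translate directly into vector arithmetic:

```python
import numpy as np

# The computational basis states of Eq. (2.3) and a generic
# superposition alpha0|0> + alpha1|1>, Eq. (2.4).
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

alpha0, alpha1 = 1 / np.sqrt(2), 1j / np.sqrt(2)  # example amplitudes
psi = alpha0 * ket0 + alpha1 * ket1

# Normalization, Eq. (2.2): |alpha0|^2 + |alpha1|^2 = 1
assert np.isclose(np.vdot(psi, psi).real, 1.0)
```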
&lt;br /&gt;
===Qubit Gates===&lt;br /&gt;
&lt;br /&gt;
During a computation, one qubit state will need to be taken to a different one. In fact,&lt;br /&gt;
it should be possible to operate on any valid state to obtain any other state. Since this&lt;br /&gt;
is a complex vector with magnitude one, the matrix transformation required for closed system&lt;br /&gt;
evolution is unitary. (See [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Appendix C, Sec. C.3.8]].) These unitary matrices, or unitary&lt;br /&gt;
transformations, as well as their generalization to many qubits, transform one complex&lt;br /&gt;
vector into another and are also called ''quantum gates'', or gating operations. Mathematically,&lt;br /&gt;
we may think of them as rotations of the complex vector and in some cases (but not all)&lt;br /&gt;
correspond to actual rotations of the physical system.&lt;br /&gt;
&lt;br /&gt;
====Circuit Diagrams for Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
Unitary transformations are represented in a circuit diagram with a box around the unitary&lt;br /&gt;
transformation. Consider a unitary transformation &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on a single qubit state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;. If the&lt;br /&gt;
result of the transformation is &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, we can then write&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = V\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.5}}&lt;br /&gt;
&lt;br /&gt;
The corresponding circuit diagram is shown in Fig. 2.1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:Vbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.1: Circuit diagram for a one-qubit gate that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;. The input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the output, &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notice that the diagram is read from left to right. This means that if two consecutive&lt;br /&gt;
gates are implemented, say &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;, the equation reads:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle = UV\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.6}}&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The circuit diagram will have the boxes in the reverse order from the equation, i.e.&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on the left and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; on the right (refer to Fig. 2.2 below). While this is somewhat confusing, it is important to remember this convention; circuit diagrams will become increasingly important as the number of operations grows larger.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:UVbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.2: Circuit diagram for two one-qubit gates that implement the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; followed by another unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt;. Like the single gate, the input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the new output, &amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
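The ordering convention of Eq. (2.6) can be checked numerically. A minimal sketch, assuming Python with NumPy and arbitrarily taking &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; to be the bit-flip gate and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; the Hadamard gate (both defined later in this chapter):

```python
import numpy as np

# Applying V first and then U corresponds to the matrix product U @ V
# acting on the state, as in Eq. (2.6): the rightmost matrix acts first.
V = np.array([[0, 1], [1, 0]], dtype=complex)                # bit-flip as V
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard as U

psi = np.array([1, 0], dtype=complex)       # |0>
psi_out = U @ (V @ psi)                     # V acts first, then U

assert np.allclose(psi_out, (U @ V) @ psi)  # same as the single product UV
```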
&lt;br /&gt;
====Examples of Important Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
There are, of course, an infinite number of possible unitary transformations that we could&lt;br /&gt;
implement on a single qubit since the set of unitary transformations can be parameterized by&lt;br /&gt;
three parameters. However, a single gate will contain a single unitary transformation, which&lt;br /&gt;
means that all three parameters are fixed. There are several such transformations that are&lt;br /&gt;
used repeatedly. For this reason, they are listed here along with their actions on a generic&lt;br /&gt;
state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;. Note that one could also completely define the transformation by&lt;br /&gt;
its action on a complete set of basis states.&lt;br /&gt;
&lt;br /&gt;
The following is called an &amp;lt;nowiki&amp;gt;“x”&amp;lt;/nowiki&amp;gt; gate, or a bit-flip, &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X = \left(\begin{array}{cc} 0 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.7}}&lt;br /&gt;
&lt;br /&gt;
Its action on a state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is to exchange the basis states,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X\left\vert{\psi}\right\rangle = \alpha_0\left\vert{1}\right\rangle + \alpha_1\left\vert{0}\right\rangle.&amp;lt;/math&amp;gt;|2.8}}&lt;br /&gt;
&lt;br /&gt;
For this reason it is also sometimes called a NOT gate. However, this term will be avoided&lt;br /&gt;
because a general NOT gate does not exist for all quantum states. (It does work for all qubit&lt;br /&gt;
states, but this is a special case.)&lt;br /&gt;
&lt;br /&gt;
The next gate is called a ''phase gate'' or a “z” gate. It is also sometimes called a ''phase-flip'',&lt;br /&gt;
and is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z = \left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.9}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate is to introduce a sign change on the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which can be seen&lt;br /&gt;
through&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle - \alpha_1\left\vert{1}\right\rangle.&amp;lt;/math&amp;gt;|2.10}}&lt;br /&gt;
&lt;br /&gt;
The term phase gate is also used for the more general transformation&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;P = \left(\begin{array}{cc} e^{i\theta} &amp;amp; 0 \\ &lt;br /&gt;
                                0       &amp;amp; e^{-i\theta} \end{array}\right).&amp;lt;/math&amp;gt;|2.11}}&lt;br /&gt;
&lt;br /&gt;
For this reason, the z-gate will either be called a “z-gate” or a phase-flip gate.&lt;br /&gt;
&lt;br /&gt;
Another gate closely related to these is the “y” gate. This gate is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y =  \left(\begin{array}{cc} 0 &amp;amp; -i \\ &lt;br /&gt;
                      i &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.12}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate on a state is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y\left\vert{\psi}\right\rangle = -i\alpha_1\left\vert{0}\right\rangle +i \alpha_0\left\vert{1}\right\rangle &lt;br /&gt;
            = -i(\alpha_1\left\vert{0}\right\rangle - \alpha_0\left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.13}}&lt;br /&gt;
&lt;br /&gt;
From this last expression, it is clear that, up to an overall factor of &amp;lt;math&amp;gt;-i\,\!&amp;lt;/math&amp;gt;, this gate is the same&lt;br /&gt;
as acting on a state with both &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt; gates. However, the order matters, and it&lt;br /&gt;
should be noted that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;XZ = -i Y,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
whereas&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;ZX = i Y.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that the order matters should not be a surprise to anyone since matrices in general&lt;br /&gt;
do not commute. However, such a condition arises so often in quantum mechanics that the&lt;br /&gt;
difference between these two is given an expression and a name. The difference between the two is called the ''commutator'' and is denoted with a &amp;lt;math&amp;gt;[\cdot,\cdot]&amp;lt;/math&amp;gt;. That is, for any two matrices, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;, the commutator is defined to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[A,B] = AB -BA.\,\!&amp;lt;/math&amp;gt;|2.14}}&lt;br /&gt;
For the two gates &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt;,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[X,Z] = -2iY.\,\!&amp;lt;/math&amp;gt;|2.15}}&lt;br /&gt;
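These identities are easy to verify numerically. A quick sketch (illustrative only, assuming Python with NumPy) of &amp;lt;math&amp;gt;XZ = -iY&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;ZX = iY&amp;lt;/math&amp;gt; and Eq. (2.15):

```python
import numpy as np

# The three gates X, Y, Z of Eqs. (2.7), (2.12) and (2.9), and a check
# of the products and the commutator [X, Z] = XZ - ZX = -2iY.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

assert np.allclose(X @ Z, -1j * Y)
assert np.allclose(Z @ X, 1j * Y)
assert np.allclose(X @ Z - Z @ X, -2j * Y)  # Eq. (2.15)
```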
A very important gate which is used in many quantum information processing protocols,&lt;br /&gt;
including quantum algorithms, is called the Hadamard gate,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.16}}&lt;br /&gt;
In this case, it's helpful to look at what this gate does to the two basis states:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H \left\vert{0}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle + \left\vert{1}\right\rangle), &amp;lt;/math&amp;gt;&amp;lt;br /&amp;gt;&amp;lt;math&amp;gt;H \left\vert{1}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle - \left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.17}}&lt;br /&gt;
&lt;br /&gt;
So the Hadamard gate will take either one of the basis states and produce an equal superposition&lt;br /&gt;
of the two basis states; this is the reason it is so often used in quantum information&lt;br /&gt;
processing tasks. On a generic state,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert{\psi}\right\rangle = \frac{1}{\sqrt{2}}[(\alpha_0+\alpha_1)\left\vert{0}\right\rangle + (\alpha_0-\alpha_1)\left\vert{1}\right\rangle].&amp;lt;/math&amp;gt;|2.18}}&lt;br /&gt;
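A short numerical check of Eq. (2.17), again a sketch assuming Python with NumPy; it also confirms that the Hadamard gate is its own inverse:

```python
import numpy as np

# H maps each basis state to an equal superposition, Eq. (2.17),
# and H applied twice is the identity.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

assert np.allclose(H @ ket0, (ket0 + ket1) / np.sqrt(2))
assert np.allclose(H @ ket1, (ket0 - ket1) / np.sqrt(2))
assert np.allclose(H @ H, np.eye(2))
```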
&lt;br /&gt;
===The Pauli Matrices===&lt;br /&gt;
The three matrices &amp;lt;math&amp;gt;X,\,\!&amp;lt;/math&amp;gt; [[#eq2.7|Eq.(2.7)]] &amp;lt;math&amp;gt;Y,\,\!&amp;lt;/math&amp;gt; [[#eq2.12|Eq.(2.12)]]  and &amp;lt;math&amp;gt; Z \,\!&amp;lt;/math&amp;gt; [[#eq2.9|Eq.(2.9)]] are called the Pauli matrices. They are also sometimes denoted &amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt; respectively. They are ubiquitous in quantum computing and quantum information processing. This is because they, along with the &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix, form a basis for the set of &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrices and can be used to&lt;br /&gt;
describe all &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; unitary transformations as well. We will return to the latter point in the next chapter.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table2.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 2.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot; |Table 2.1: ''The Pauli Matrices.  The table shows the Pauli matrices, three different, but common notations, and the action on a state.  The &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a binary digit, 0 or 1.''&lt;br /&gt;
|-&lt;br /&gt;
|Pauli Matrix&lt;br /&gt;
|Notation 1&lt;br /&gt;
|Notation 2&lt;br /&gt;
|Notation 3&lt;br /&gt;
|Action&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; 1 \\ 1 &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X|x\rangle = |x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; -i \\ i &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y =iXZ\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y|x\rangle = i(-1)^x|x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z|x\rangle = (-1)^x|x\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To show that they form a basis for &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; Hermitian matrices, note that any such matrix can be written in the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;A = \left(\begin{array}{cc} &lt;br /&gt;
                a_0+a_3  &amp;amp; a_1+ia_2 \\ &lt;br /&gt;
                a_1-ia_2 &amp;amp; a_0-a_3 \end{array}\right).&amp;lt;/math&amp;gt;|2.19}}&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;a_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_3\,\!&amp;lt;/math&amp;gt; are arbitrary, &amp;lt;math&amp;gt;a_0 + a_3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_0 - a_3\,\!&amp;lt;/math&amp;gt; are arbitrary too. This matrix can be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}A &amp;amp;= a_0 \mathbb{I} + a_1X + a_2Y + a_3 Z \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + a_1\sigma_1 + a_2\sigma_2 + a_3 \sigma_3 \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + \vec{a}\cdot\vec{\sigma}, \\&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|2.20}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{a}\cdot\vec{\sigma} = \sum_{i=1}^3a_i\sigma_i\,\!&amp;lt;/math&amp;gt; is the &amp;quot;dot&lt;br /&gt;
product&amp;quot; between &amp;lt;math&amp;gt;\vec{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{\sigma} = (\sigma_1,\sigma_2,\sigma_3)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An important and useful relationship between these is the following (which shows why&lt;br /&gt;
the latter notation above is so useful)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma_i\sigma_j = \mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k,&amp;lt;/math&amp;gt;|2.21}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;i, j, k\,\!&amp;lt;/math&amp;gt; are numbers from the set &amp;lt;math&amp;gt;\{1, 2, 3\}\,\!&amp;lt;/math&amp;gt; and the definitions for &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; are given&lt;br /&gt;
in Eqs. [[Appendix C - Vectors and Linear Algebra#eqC.17|(C.17)]] and [[Appendix C - Vectors and Linear Algebra#eqC.8|(C.8)]] respectively. The three matrices &amp;lt;math&amp;gt;\sigma_1, \sigma_2, \sigma_3\,\!&amp;lt;/math&amp;gt; are traceless Hermitian&lt;br /&gt;
matrices and they can be seen to be orthogonal using the so-called ''Hilbert-Schmidt inner product'', which is defined, for matrices &amp;lt;math&amp;gt; A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(A,B) = \mbox{Tr}(A^\dagger B).&amp;lt;/math&amp;gt;|2.22}}&lt;br /&gt;
&lt;br /&gt;
The orthogonality for the set is then summarized as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\sigma_i,\sigma_j) = \mbox{Tr}(\sigma_i\sigma_j) = 2\delta_{ij}.\,\!&amp;lt;/math&amp;gt;|2.23}}&lt;br /&gt;
&lt;br /&gt;
This property is contained in Eq. [[#eq2.21|(2.21)]]. This one equation also contains all of the commutators.&lt;br /&gt;
Subtracting the equation with the product reversed,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = (\mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k) &lt;br /&gt;
                      -(\mathbb{I}\delta_{ji} +i \epsilon_{jik}\sigma_k),&amp;lt;/math&amp;gt;|2.24}}&lt;br /&gt;
&lt;br /&gt;
but &amp;lt;math&amp;gt;\delta_{ij}=\delta_{ji}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk} = -\epsilon_{jik}\,\!&amp;lt;/math&amp;gt;.  This can now be simplified,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = 2i \epsilon_{ijk}\sigma_k.\,\!&amp;lt;/math&amp;gt;|2.25}}&lt;br /&gt;
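The decomposition of Eq. (2.20) together with the inner product of Eq. (2.22) gives a practical recipe for the coefficients: &amp;lt;math&amp;gt;a_i = \mbox{Tr}(\sigma_i A)/2&amp;lt;/math&amp;gt;. A sketch assuming Python with NumPy, with arbitrary example coefficients (not from the text):

```python
import numpy as np

# Recover the expansion coefficients of Eq. (2.20) via the
# Hilbert-Schmidt inner product, Eq. (2.22): a_i = Tr(sigma_i A) / 2,
# where sigma_0 is the identity.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

a = np.array([0.5, -1.2, 0.3, 2.0])            # example real coefficients
A = a[0]*I + a[1]*X + a[2]*Y + a[3]*Z          # a Hermitian matrix, Eq. (2.20)

recovered = [np.trace(s @ A).real / 2 for s in (I, X, Y, Z)]
assert np.allclose(recovered, a)
```

The recipe works because the Pauli matrices are traceless and mutually orthogonal under this inner product, Eq. (2.23).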
&lt;br /&gt;
===States of Many Qubits===&lt;br /&gt;
Let us now consider the states of several (or many) qubits. For one qubit, there are two&lt;br /&gt;
possible basis states, say &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. If there are two qubits, each with these basis states,&lt;br /&gt;
basis states for the two together are found by using the tensor product. (See Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]].)&lt;br /&gt;
The set of basis states obtained in this way is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{0}\right\rangle\otimes\left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{1}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle\otimes\left\vert{1}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This set is more often written in short-hand notation as (again see Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]] for details and examples)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{00}\right\rangle, \; \left\vert{01}\right\rangle, \;&lt;br /&gt;
  \left\vert{10}\right\rangle, \; \left\vert{11}\right\rangle \right\},\,\!&amp;lt;/math&amp;gt;|2.26}}&lt;br /&gt;
&lt;br /&gt;
which can also be expressed as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left(\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right)&lt;br /&gt;
\right\}.\,\!&amp;lt;/math&amp;gt;|2.27}}&lt;br /&gt;
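The tensor-product construction of Eqs. (2.26)–(2.27) can be sketched with NumPy's Kronecker product (an illustrative aside, assuming Python with NumPy):

```python
import numpy as np

# Two-qubit basis states built with the tensor (Kronecker) product,
# e.g. |01> is ket0 tensored with ket1.
ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

ket01 = np.kron(ket0, ket1)
assert np.array_equal(ket01, np.array([0, 1, 0, 0]))

# All four basis states, in the binary ordering of Eq. (2.26);
# stacked as rows they form the 4x4 identity, matching Eq. (2.27).
basis = [np.kron(a, b) for a in (ket0, ket1) for b in (ket0, ket1)]
assert np.array_equal(np.stack(basis), np.eye(4))
```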
&lt;br /&gt;
The extension to three qubits is straightforward,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{000}\right\rangle, \; \left\vert{001}\right\rangle, \;&lt;br /&gt;
  \left\vert{010}\right\rangle, \; \left\vert{011}\right\rangle, \; \left\vert{100}\right\rangle, \; \left\vert{101}\right\rangle, \;&lt;br /&gt;
  \left\vert{110}\right\rangle, \; \left\vert{111}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.28}}&lt;br /&gt;
&lt;br /&gt;
Those familiar with binary will recognize these as the numbers zero through seven. Thus we&lt;br /&gt;
consider this an ''ordered basis'', and the states can also be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{2}\right\rangle, \; \left\vert{3}\right\rangle, \; \left\vert{4}\right\rangle, \; \left\vert{5}\right\rangle, \;&lt;br /&gt;
  \left\vert{6}\right\rangle, \; \left\vert{7}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.29}}&lt;br /&gt;
&lt;br /&gt;
The ordering of the products is important because each spot&lt;br /&gt;
corresponds to a physical particle or physical system.  When some&lt;br /&gt;
confusion may arise, we may also label the ket with a subscript to&lt;br /&gt;
denote the particle or position.  For example, two different people,&lt;br /&gt;
Alice and Bob, can be used to represent distant parties that may&lt;br /&gt;
share some information or wish to communicate.  In this case, the&lt;br /&gt;
state belonging to Alice can be denoted &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_A\,\!&amp;lt;/math&amp;gt;.  Or if she is&lt;br /&gt;
referred to as party 1 or particle 1, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The most general 2-qubit state is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_{00}\left\vert{00}\right\rangle + \alpha_{01}\left\vert{01}\right\rangle &lt;br /&gt;
             + \alpha_{10}\left\vert{10}\right\rangle + \alpha_{11}\left\vert{11}\right\rangle &lt;br /&gt;
           =\left(\begin{array}{c} \alpha_{00} \\ \alpha_{01} \\ &lt;br /&gt;
                                   \alpha_{10} \\ \alpha_{11} \end{array}\right).&amp;lt;/math&amp;gt;|2.30}}&lt;br /&gt;
&lt;br /&gt;
The normalization condition is &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_{00}|^2  + |\alpha_{01}|^2&lt;br /&gt;
             + |\alpha_{10}|^2 + |\alpha_{11}|^2=1.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
The generalization to an arbitrary number of qubits, say &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, is also&lt;br /&gt;
rather straightforward and can be written as &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \sum_{i=0}^{2^n-1} \alpha_i\left\vert{i}\right\rangle.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Quantum Gates for Many Qubits===&lt;br /&gt;
&lt;br /&gt;
Just as the case for one single qubit, the most general closed-system transformation of a&lt;br /&gt;
state of many qubits is a unitary transformation. Being able to make an arbitrary unitary&lt;br /&gt;
transformation on many qubits is an important task. If an arbitrary unitary transformation&lt;br /&gt;
on a set of qubits can be made, then any quantum gate can be implemented. If this ability to&lt;br /&gt;
implement any arbitrary quantum gate can be accomplished using a particular set of quantum&lt;br /&gt;
gates, that set is said to be a ''universal set of gates'' or that the condition of ''universality'' has&lt;br /&gt;
been met by this set. It turns out that there is a theorem which provides one way for&lt;br /&gt;
identifying a universal set of gates.&lt;br /&gt;
&lt;br /&gt;
'''Theorem:'''&lt;br /&gt;
&lt;br /&gt;
''The ability to implement an entangling gate between any two qubits, plus the ability to implement all single-qubit unitary transformations, will enable universal quantum computing.''&lt;br /&gt;
&lt;br /&gt;
It turns out that one doesn’t need to be able to perform an entangling gate between&lt;br /&gt;
distant qubits; nearest-neighbor interactions are sufficient. We can transfer the state of a&lt;br /&gt;
qubit to a qubit that is next to the one we would like it to interact with, then perform&lt;br /&gt;
the entangling gate between the two and then transfer back.&lt;br /&gt;
&lt;br /&gt;
This is an important and often used theorem which will be the main focus of the next&lt;br /&gt;
few sections. A particular class of two-qubit gates which can be used to entangle qubits will&lt;br /&gt;
be discussed along with circuit diagrams for many qubits.&lt;br /&gt;
&lt;br /&gt;
====Controlled Operations====&lt;br /&gt;
&lt;br /&gt;
A controlled operation is one that is conditioned on the state of another part of the system, usually a qubit. The most cited example is the CNOT (controlled NOT) gate, which flips one (target) qubit if another (control) qubit is in the state &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;; thus it is a controlled NOT operation for qubits. This gate is used often enough to warrant detailed discussion here.&lt;br /&gt;
&lt;br /&gt;
Consider the following matrix operation on two qubits:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;C_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.31}}&lt;br /&gt;
&lt;br /&gt;
Under this transformation, the following changes occur:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{array}{c|c}&lt;br /&gt;
         \; \left\vert{\psi}\right\rangle\; &amp;amp; C_{12}\left\vert{\psi}\right\rangle \\ \hline&lt;br /&gt;
                \left\vert{00}\right\rangle &amp;amp; \left\vert{00}\right\rangle \\&lt;br /&gt;
                \left\vert{01}\right\rangle &amp;amp; \left\vert{01}\right\rangle \\&lt;br /&gt;
                \left\vert{10}\right\rangle &amp;amp; \left\vert{11}\right\rangle \\&lt;br /&gt;
                \left\vert{11}\right\rangle &amp;amp; \left\vert{10}\right\rangle &lt;br /&gt;
\end{array}&amp;lt;/math&amp;gt;|2.32}}&lt;br /&gt;
&lt;br /&gt;
This transformation is called the CNOT, or controlled NOT, since the second bit is flipped&lt;br /&gt;
if the first is in the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt; and otherwise left alone. The circuit diagram for this transformation corresponds to the following representation of the gate. Let &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; be zero or one.&lt;br /&gt;
The CNOT is then given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{x}\right\rangle_{i}\left\vert{y}\right\rangle_{j} \overset{CNOT}{\rightarrow} \left\vert{x}\right\rangle_{i}\left\vert{x\oplus y}\right\rangle_{j}.&amp;lt;/math&amp;gt;|2.33}}&lt;br /&gt;
&lt;br /&gt;
In binary, of course &amp;lt;math&amp;gt;0\oplus 0 =0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;0\oplus 1 = 1 = 1\oplus 0&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;1\oplus 1 =0&amp;lt;/math&amp;gt;.  The circuit diagram is given in Fig. 2.3 below. &lt;br /&gt;
The first qubit at the top of the diagram, &amp;lt;math&amp;gt;\left\vert{x}\right\rangle&amp;lt;/math&amp;gt;, is called the&lt;br /&gt;
''control bit'' while the one below, &amp;lt;math&amp;gt;\left\vert{y}\right\rangle&amp;lt;/math&amp;gt;, is called the ''target bit''.&lt;br /&gt;
&lt;br /&gt;
[[File:CNOT.jpg|center|400px]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Figure 2.3: Circuit diagram for a CNOT gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
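The action of the CNOT in Eq. 2.33 can be checked numerically. The following short NumPy sketch (an illustration, not part of the wiki text) applies the matrix of Eq. 2.31 to each computational basis state:&lt;br /&gt;

```python
import numpy as np

# CNOT with qubit 1 as control and qubit 2 as target (Eq. 2.31)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Computational basis states |00>, |01>, |10>, |11> as column vectors
basis = {s: np.eye(4)[:, i] for i, s in enumerate(["00", "01", "10", "11"])}

# CNOT maps |x>|y> to |x>|x XOR y> (Eq. 2.33)
for x in (0, 1):
    for y in (0, 1):
        out = CNOT @ basis[f"{x}{y}"]
        assert np.allclose(out, basis[f"{x}{x ^ y}"])
```

Since flipping the target twice undoes the flip, the CNOT is its own inverse.&lt;br /&gt;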
&lt;br /&gt;
One can immediately generalize the operation of the CNOT to a controlled-U gate. This&lt;br /&gt;
is a gate, shown in Fig. 2.4, which implements a unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; on the second&lt;br /&gt;
qubit, if the state of the first is &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. The matrix transformation is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;CU_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{21} &amp;amp; u_{22} \end{array}\right),&amp;lt;/math&amp;gt;|2.34}}&lt;br /&gt;
&lt;br /&gt;
where the matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;U = \left(\begin{array}{cc}&lt;br /&gt;
          u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
          u_{21} &amp;amp; u_{22} \end{array}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, the controlled-phase gate is given in [[#Figure 2.5|Fig. 2.5]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CU.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.4: Circuit diagram for a CU gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
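The block structure of Eq. 2.34 (identity on the subspace where the control qubit is &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\,\!&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; where it is &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;) can be illustrated with a short NumPy sketch; the choice of &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; below is an arbitrary example:&lt;br /&gt;

```python
import numpy as np

# An example single-qubit unitary U; the phase-flip Z is chosen arbitrarily
U = np.array([[1, 0],
              [0, -1]])

# Controlled-U as in Eq. 2.34: identity on the control-|0> block, U on the control-|1> block
CU = np.block([[np.eye(2), np.zeros((2, 2))],
               [np.zeros((2, 2)), U]])

# When the control qubit is |0>, the target is left alone
ket01 = np.array([0, 1, 0, 0])        # the state |01>
assert np.allclose(CU @ ket01, ket01)

# When the control qubit is |1>, U acts on the target
ket11 = np.array([0, 0, 0, 1])        # the state |11>
assert np.allclose(CU @ ket11, np.array([0, 0, 0, -1]))
```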
&lt;br /&gt;
====Many-qubit Circuits====&lt;br /&gt;
&lt;br /&gt;
Many-qubit circuits are a straightforward generalization of the single-qubit circuit diagrams.&lt;br /&gt;
For example, Fig. 2.6 shows the implementation of CNOT&amp;lt;math&amp;gt;_{14}&amp;lt;/math&amp;gt; and CNOT&amp;lt;math&amp;gt;_{23}&amp;lt;/math&amp;gt; in the&lt;br /&gt;
same diagram. The crossing of lines is not confusing since there is a target and control&lt;br /&gt;
which are clearly distinguished in each case.&lt;br /&gt;
&lt;br /&gt;
It is quite interesting, however, that as the diagrams become more complicated, the possibility&lt;br /&gt;
arises that one may change between equivalent forms of a circuit that, in the end,&lt;br /&gt;
&amp;lt;div id =&amp;quot;Figure 2.5&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CP.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.5: Circuit diagram for a controlled-phase (&amp;lt;math&amp;gt;C_{PHASE}\,\!&amp;lt;/math&amp;gt;) gate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Multiqcs.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.6: Multiple CNOT gates on a set of qubits.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
implements the same multiple-qubit unitary. For example, noting that &amp;lt;math&amp;gt;H(C_{PHASE})H = CNOT\,\!&amp;lt;/math&amp;gt;, the two&lt;br /&gt;
circuits in Fig. 2.7 implement the same two-qubit unitary transformation. This enables the&lt;br /&gt;
simplification of some quite complicated circuits.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Hzhequiv.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.7: Two circuits which are equivalent since they implement the same two-qubit&lt;br /&gt;
unitary transformation.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
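The circuit identity used here can be verified directly. The following NumPy sketch (illustrative only) checks that Hadamards on the target qubit before and after a controlled-phase gate reproduce the CNOT matrix:&lt;br /&gt;

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)    # Hadamard gate
CZ = np.diag([1, 1, 1, -1])             # controlled-phase gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Hadamard on the target qubit (qubit 2), identity on the control (qubit 1)
IH = np.kron(np.eye(2), H)

# H (C_PHASE) H = CNOT, as matrices
assert np.allclose(IH @ CZ @ IH, CNOT)
```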
&lt;br /&gt;
===Measurement===&lt;br /&gt;
&lt;br /&gt;
Measurement in quantum mechanics is quite different from that of&lt;br /&gt;
classical mechanics.  In classical mechanics (and computing), one assumes that a measurement&lt;br /&gt;
can be made at will without disturbing or changing the state of the&lt;br /&gt;
physical system.  In quantum mechanics, this assumption cannot be&lt;br /&gt;
made.  This is important for a variety of reasons that will become&lt;br /&gt;
clear later.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Standard Prescription====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the introduction a simple example was provided to distinguish quantum states from classical states.  This example of &lt;br /&gt;
two wells with one particle can (with caution) be used here as well.  &lt;br /&gt;
&lt;br /&gt;
Consider the quantum state in a superposition of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert\psi\right\rangle = \alpha_0\left\vert 0\right\rangle +&lt;br /&gt;
    \alpha_1\left\vert 1\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.35}}&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  If the state is measured in&lt;br /&gt;
the computational basis, the result will be &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  As always, it is important to note that before the measurement the state is not in either of the computational basis states but in a superposition of the two.&lt;br /&gt;
&lt;br /&gt;
This can be easily shown by taking the equal superposition, &amp;lt;math&amp;gt;\alpha_0 = \alpha_1 = 1/\sqrt{2}\,\!&amp;lt;/math&amp;gt;, and acting on the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; with a Hadamard transformation,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert \psi\right\rangle = \left\vert 0\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.36}}&lt;br /&gt;
&lt;br /&gt;
This state, produced from a unitary transformation of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt;, has probability &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; and probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;.  If it were in one or the other, then acting on the state with a Hadamard transformation would give some probability of it being in &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and some probability of being in &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;. (This argument is so&lt;br /&gt;
simple and pointed that it was taken almost word-for-word from  [[Bibliography#Mermin:qcbook|Mermin's book]], page 27.)  &lt;br /&gt;
&lt;br /&gt;
A measurement in the computational basis is said to project this state into either the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probabilities &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt; respectively.  To understand this as a projection, consider the following way in which the &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;-component of the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is found.  The state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is projected onto the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; mathematically by taking the [[Index#I|inner product]] (see [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|Section C.4]]) of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle 0\mid  \psi\right\rangle = \alpha_0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.37}}&lt;br /&gt;
&lt;br /&gt;
Notice that this is a complex number and that its complex conjugate&lt;br /&gt;
can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi \mid 0\right\rangle = \alpha_0^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.38}}&lt;br /&gt;
&lt;br /&gt;
Therefore the probability can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi\mid 0 \right\rangle \left\langle 0\mid\psi\right\rangle = \left\vert\left\langle &lt;br /&gt;
  0\mid \psi\right\rangle \right\vert^2.\,\!&amp;lt;/math&amp;gt;|2.39}}&lt;br /&gt;
&lt;br /&gt;
Now consider a multiple-qubit system with state &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert \Psi\right\rangle = \sum_i \alpha_i\left\vert i\right\rangle.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
The result of a measurement is a projection: the&lt;br /&gt;
state is projected onto the basis state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;; the same properties hold for this more general&lt;br /&gt;
system.  &lt;br /&gt;
&lt;br /&gt;
To summarize, if a measurement is made on the system &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; is obtained with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Assuming that &amp;lt;math&amp;gt;\left\vert i\right\rangle \,\!&amp;lt;/math&amp;gt; results from the measurement, the state of the&lt;br /&gt;
system has been projected into the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  Therefore, the&lt;br /&gt;
state of the system immediately after the measurement is &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
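The measurement prescription above can be simulated. In the following NumPy sketch (with arbitrarily chosen example amplitudes), repeated computational-basis measurements of the single-qubit state of Eq. 2.35 reproduce the stated probabilities:&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

# A qubit state with example amplitudes alpha_0 = 0.6, alpha_1 = 0.8 (Eq. 2.35)
alpha = np.array([0.6, 0.8])
probs = np.abs(alpha) ** 2              # |alpha_0|^2 = 0.36, |alpha_1|^2 = 0.64

# Simulate many measurements in the computational basis
outcomes = rng.choice([0, 1], size=100_000, p=probs)
freq0 = np.mean(outcomes == 0)

# The empirical frequency of outcome 0 approaches |alpha_0|^2
assert np.isclose(freq0, probs[0], atol=0.01)
```

After each simulated measurement the post-measurement state would be the observed basis state, not the original superposition.&lt;br /&gt;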
&lt;br /&gt;
A circuit diagram with a measurement represented by a box with an&lt;br /&gt;
arrow is given in Figure 2.8.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementcd.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.8: The circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
An alternative is to put an &amp;lt;nowiki&amp;gt;&amp;quot;M&amp;quot;&amp;lt;/nowiki&amp;gt; inside the box.  This is shown in Fig. 2.9.  &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementM.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.9: An alternative circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an example, the measurement result can be used as input for another operation.  The unitary transformation&lt;br /&gt;
in Figure 2.10 is one that depends upon the outcome of the&lt;br /&gt;
measurement.  Notice that the input information, since it is&lt;br /&gt;
classical, is represented by a double line.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurement.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.10: A circuit which includes a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Projection Operators====&lt;br /&gt;
&lt;br /&gt;
Projection operators are used quite often and the description of&lt;br /&gt;
measurement in the previous section is a good example of how they are&lt;br /&gt;
used.  One may ask, what is a projector?  In ordinary&lt;br /&gt;
three-dimensional space, a vector is written as &lt;br /&gt;
&amp;lt;math&amp;gt;\vec v=v_x\hat{x}+v_y\hat{y}+v_z\hat{z}\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; part of the&lt;br /&gt;
vector can be obtained by &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x}\cdot\vec v) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
This is the part of the vector lying along the x axis.  Notice that if&lt;br /&gt;
the projection is performed again, the same result is obtained&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x} \cdot v_x\hat{x}) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.41}}&lt;br /&gt;
&lt;br /&gt;
This is the characteristic property of projection operators.  When one is&lt;br /&gt;
performed twice, the second result is the same as the first.  &lt;br /&gt;
&lt;br /&gt;
This can be extended to the complex vectors in quantum mechanics.  The&lt;br /&gt;
outer product &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\!\!\left\langle{x}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector.  For example,&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector and can be written in matrix form as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert = \left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.42}}&lt;br /&gt;
&lt;br /&gt;
Acting with this on &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
           \alpha_1 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.43}}&lt;br /&gt;
&lt;br /&gt;
Acting again produces&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
              0 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.44}}&lt;br /&gt;
&lt;br /&gt;
This is due to the fact that&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert)^2 = \left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.45}}&lt;br /&gt;
&lt;br /&gt;
In fact, this property essentially defines a projection: a projection is&lt;br /&gt;
a linear transformation &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;P^2 = P\,\!&amp;lt;/math&amp;gt;. Much of our intuition about geometric projections in&lt;br /&gt;
three dimensions carries over to the more abstract cases.  One important&lt;br /&gt;
example is that the sum of the projectors onto all the vectors of an orthonormal basis is the identity. The&lt;br /&gt;
generalization to arbitrary dimensions, where &amp;lt;math&amp;gt;\left\vert{i}\right\rangle\,\!&amp;lt;/math&amp;gt; is any basis&lt;br /&gt;
vector in that space, is immediate.  In this case the identity,&lt;br /&gt;
expressed as a sum over all projectors, is &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sum_{i} \left\vert{i}\right\rangle\!\!\left\langle{i}\right\vert = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.46}}&lt;br /&gt;
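The defining properties above, idempotence (Eq. 2.45) and completeness (Eq. 2.46), are easy to confirm numerically; the following NumPy sketch is illustrative:&lt;br /&gt;

```python
import numpy as np

# Projectors onto |0> and |1>, built as outer products
ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])
P0 = ket0 @ ket0.T                      # the matrix of Eq. 2.42
P1 = ket1 @ ket1.T

# Idempotence: P^2 = P (Eq. 2.45)
assert np.allclose(P0 @ P0, P0)
assert np.allclose(P1 @ P1, P1)

# Completeness: the projectors sum to the identity (Eq. 2.46)
assert np.allclose(P0 + P1, np.eye(2))

# Acting twice gives the same result as acting once (Eqs. 2.43 and 2.44)
psi = np.array([[0.6], [0.8]])          # example amplitudes
assert np.allclose(P0 @ (P0 @ psi), P0 @ psi)
```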
&lt;br /&gt;
====Phase in/Phase out====&lt;br /&gt;
&lt;br /&gt;
The probability of finding the system in the state &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
where &amp;lt;math&amp;gt;x=0\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi}\right\rangle}(\left\vert{x}\right\rangle) &amp;amp;= \left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.47}}&lt;br /&gt;
Note that &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\langle{\psi}\right\vert\,\!&amp;lt;/math&amp;gt; both appear in this&lt;br /&gt;
expression. So if &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; were &lt;br /&gt;
substituted into the expression for &amp;lt;math&amp;gt;\mbox{Prob}(\left\vert{x}\right\rangle)\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
expression would be unchanged, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi^\prime}\right\rangle}(\left\vert{x}\right\rangle) &lt;br /&gt;
                     &amp;amp;= \left\langle{\psi^\prime}\mid{x}\right\rangle\left\langle{x}\mid{\psi^\prime}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= e^{-i\theta}\left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle e^{i\theta} \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.48}}&lt;br /&gt;
Therefore when &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; changes by a phase, there is no effect on&lt;br /&gt;
this probability.  This is why it is often said that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
         e^{i\theta} &amp;amp; 0 \\&lt;br /&gt;
               0  &amp;amp; e^{-i\theta}  \end{array}\right) &lt;br /&gt;
= e^{i\theta}\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-i2\theta}  \end{array}\right) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.49}}&lt;br /&gt;
is equivalent to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.50}}&lt;br /&gt;
&lt;br /&gt;
However, there are times when a phase can make a difference. In&lt;br /&gt;
those cases it is really a ''relative'' phase between two states that makes the difference. This will become clear later on.&lt;br /&gt;
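The distinction between a global and a relative phase can also be seen numerically. In the sketch below (NumPy, with example amplitudes), a global phase leaves all probabilities unchanged, while a relative phase changes the statistics of a measurement in a Hadamard-rotated basis:&lt;br /&gt;

```python
import numpy as np

psi = np.array([0.6, 0.8])              # example amplitudes
theta = 0.7                             # an arbitrary phase angle

# A global phase has no effect on measurement probabilities (Eq. 2.48)
psi_global = np.exp(-1j * theta) * psi
assert np.allclose(np.abs(psi_global) ** 2, np.abs(psi) ** 2)

# A relative phase between the two amplitudes is physically meaningful:
# it changes the outcome statistics in a rotated (Hadamard) basis
psi_rel = psi * np.array([1, np.exp(1j * np.pi)])   # flips the sign of alpha_1
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
assert not np.allclose(np.abs(H @ psi) ** 2, np.abs(H @ psi_rel) ** 2)
```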
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 3 - Physics of Quantum Information#Introduction|Continue to '''Chapter 3 - Physics of Quantum Information''']]&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1784</id>
		<title>Chapter 2 - Qubits and Collections of Qubits</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1784"/>
		<updated>2012-01-05T08:34:00Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Many-qubit Circuits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
There are several parts to any quantum information processing task. Some of these were&lt;br /&gt;
written down and discussed by David DiVincenzo in the early days of quantum computing&lt;br /&gt;
research and are therefore called DiVincenzo’s requirements for quantum computing. These&lt;br /&gt;
include, but are not limited to, the following, which will be discussed in this chapter. Other&lt;br /&gt;
requirements will be discussed later.&lt;br /&gt;
&lt;br /&gt;
Five requirements [[Bibliography#qcrequirements|DiVincenzo:2000]]:&lt;br /&gt;
#Be a scalable physical system with well-defined qubits&lt;br /&gt;
#Be initializable to a simple fiducial state such as &amp;lt;math&amp;gt;\left\vert{000...}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
#Have much longer decoherence times than gating times&lt;br /&gt;
#Have a universal set of quantum gates&lt;br /&gt;
#Permit qubit-specific measurements&lt;br /&gt;
&lt;br /&gt;
The first requirement is a set of two-state quantum systems which can serve as qubits. The&lt;br /&gt;
second is to be able to initialize the set of qubits to some reference state. In this chapter,&lt;br /&gt;
these will be taken for granted. The third concerns noise, which has become known by &lt;br /&gt;
the term decoherence. The term decoherence has had a more precise definition in the past,&lt;br /&gt;
but here it will usually be synonymous with noise. Noise and decoherence will be discussed in [[Chapter 6 - Noise in Quantum Systems|Chapter 6]].  This chapter is primarily concerned with the fourth and fifth of these criteria.  This will enable us to discuss many interesting aspects of the quantum information problem while postponing some other technical details regarding the other criteria.&lt;br /&gt;
&lt;br /&gt;
===Qubit States===&lt;br /&gt;
&lt;br /&gt;
As mentioned in the introduction, a qubit, or quantum bit, is represented by a two-state&lt;br /&gt;
quantum system. It is referred to as a two-state quantum system, although there are many&lt;br /&gt;
physical examples of qubits which are represented by two different states of a quantum&lt;br /&gt;
system that has many available states. These two states are represented by the vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; and the qubit could be in the state &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;, the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, or a complex superposition of&lt;br /&gt;
these two. A qubit state which is an arbitrary superposition is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle,&amp;lt;/math&amp;gt; |2.1}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers. Our objective is to use these two states to store and&lt;br /&gt;
manipulate information. If the state of the system is confined to one state, the other, or a&lt;br /&gt;
superposition of the two, then&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1.\,\!&amp;lt;/math&amp;gt; |2.2}}&lt;br /&gt;
&lt;br /&gt;
This means that this vector is normalized, i.e. its magnitude (or length) is one. The set of all such&lt;br /&gt;
vectors forms a two-dimensional complex (so four-dimensional real) vector space.&amp;lt;ref name=&amp;quot;test&amp;quot;&amp;gt;[[Appendix B - Complex Numbers|Appendix B]] contains a basic introduction to complex numbers.&amp;lt;/ref&amp;gt; The basis vectors for such a space are the two vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which are called ''computational basis'' states. These two basis states are represented by&lt;br /&gt;
 &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{0}\right\rangle = \left(\begin{array}{c} 1 \\ 0\end{array}\right), \;\;\left\vert{1}\right\rangle = \left(\begin{array}{c} 0 \\ 1\end{array}\right).&amp;lt;/math&amp;gt; |2.3}}&lt;br /&gt;
&lt;br /&gt;
Thus, the qubit state can be rewritten as&lt;br /&gt;
&lt;br /&gt;
{{Equation |&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1\end{array}\right).&amp;lt;/math&amp;gt; |2.4}}&lt;br /&gt;
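The vector representation above maps directly onto arrays. The following NumPy sketch (with arbitrarily chosen example amplitudes) checks the normalization condition of Eq. 2.2:&lt;br /&gt;

```python
import numpy as np

# Computational basis states (Eq. 2.3)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# An arbitrary superposition (Eqs. 2.1 and 2.4); these amplitudes are just an example
alpha0 = (1 + 1j) / 2
alpha1 = 1j / np.sqrt(2)
psi = alpha0 * ket0 + alpha1 * ket1

# Normalization (Eq. 2.2): |alpha_0|^2 + |alpha_1|^2 = 1
assert np.isclose(abs(alpha0) ** 2 + abs(alpha1) ** 2, 1.0)
assert np.isclose(np.linalg.norm(psi), 1.0)
```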
&lt;br /&gt;
===Qubit Gates===&lt;br /&gt;
&lt;br /&gt;
During a computation, one qubit state will need to be taken to a different one. In fact,&lt;br /&gt;
it should be possible to operate on any valid state to obtain any other state. Since the state&lt;br /&gt;
is a complex vector with magnitude one, the matrix transformation required for closed-system&lt;br /&gt;
evolution is unitary. (See [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Appendix C, Sec. C.3.8]].) These unitary matrices, or unitary&lt;br /&gt;
transformations, as well as their generalization to many qubits, transform one complex&lt;br /&gt;
vector into another and are also called ''quantum gates'', or gating operations. Mathematically,&lt;br /&gt;
we may think of them as rotations of the complex vector and in some cases (but not all)&lt;br /&gt;
correspond to actual rotations of the physical system.&lt;br /&gt;
&lt;br /&gt;
====Circuit Diagrams for Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
Unitary transformations are represented in a circuit diagram with a box around the unitary&lt;br /&gt;
transformation. Consider a unitary transformation &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on a single qubit state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;. If the&lt;br /&gt;
result of the transformation is &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, we can then write&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = V\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.5}}&lt;br /&gt;
&lt;br /&gt;
The corresponding circuit diagram is shown in Fig. 2.1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:Vbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.1: Circuit diagram for a one-qubit gate that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;. The input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the output, &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notice that the diagram is read from left to right. This means that if two consecutive&lt;br /&gt;
gates are implemented, say &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;, the equation reads:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle = UV\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.6}}&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The circuit diagram will have the boxes in the reverse order from the equation, i.e.&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on the left and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; on the right (refer to Fig. 2.2 below). While this is somewhat confusing, it is important to remember this convention; circuit diagrams become increasingly important as the number of operations grows larger.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:UVbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.2: Circuit diagram for two one-qubit gates that implement the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; followed by another unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt;. Like the single gate, the input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the new output, &amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
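The ordering convention of Eq. 2.6 is easy to check with matrices. In this NumPy sketch the gates are arbitrary examples; reading the circuit left to right corresponds to multiplying the matrices right to left:&lt;br /&gt;

```python
import numpy as np

V = np.array([[0, 1], [1, 0]])          # example first gate (a bit flip)
U = np.array([[1, 0], [0, -1]])         # example second gate (a phase flip)

psi = np.array([1, 0], dtype=complex)   # the state |0>

# V is applied first, then U, so the output state is U V |psi> (Eq. 2.6)
psi_pp = U @ (V @ psi)
assert np.allclose(psi_pp, (U @ V) @ psi)

# The order matters: in general U V and V U are different transformations
assert not np.allclose(U @ V, V @ U)
```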
&lt;br /&gt;
====Examples of Important Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
There are, of course, an infinite number of possible unitary transformations that we could&lt;br /&gt;
implement on a single qubit since the set of unitary transformations can be parameterized by&lt;br /&gt;
three parameters. However, a single gate will contain a single unitary transformation, which&lt;br /&gt;
means that all three parameters are fixed. There are several such transformations that are&lt;br /&gt;
used repeatedly. For this reason, they are listed here along with their actions on a generic&lt;br /&gt;
state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;. Note that one could also completely define the transformation by&lt;br /&gt;
its action on a complete set of basis states.&lt;br /&gt;
&lt;br /&gt;
The following is called an &amp;lt;nowiki&amp;gt;“x”&amp;lt;/nowiki&amp;gt; gate, or a bit-flip, &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X = \left(\begin{array}{cc} 0 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.7}}&lt;br /&gt;
&lt;br /&gt;
Its action on a state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is to exchange the basis states,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X\left\vert{\psi}\right\rangle = \alpha_0\left\vert{1}\right\rangle + \alpha_1\left\vert{0}\right\rangle,&amp;lt;/math&amp;gt;|2.8}}&lt;br /&gt;
&lt;br /&gt;
for this reason it is also sometimes called a NOT gate. However, this term will be avoided&lt;br /&gt;
because a general NOT gate does not exist for all quantum states. (It does work for all qubit&lt;br /&gt;
states, but this is a special case.)&lt;br /&gt;
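&lt;br /&gt;
As a concrete numerical sketch of Eqs. (2.7) and (2.8) (this code is illustrative and not part of the text; the amplitudes are chosen arbitrarily and NumPy is assumed available):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Pauli X (bit-flip) gate, Eq. (2.7)
X = np.array([[0, 1],
              [1, 0]], dtype=complex)

# a generic qubit state |psi> = a0|0> + a1|1> (amplitudes chosen for illustration)
a0, a1 = 0.6, 0.8j
psi = np.array([a0, a1])

# X|psi> = a0|1> + a1|0>, Eq. (2.8): the two amplitudes are exchanged
flipped = X @ psi
```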
&lt;br /&gt;
The next gate is called a ''phase gate'' or a “z” gate. It is also sometimes called a ''phase-flip'',&lt;br /&gt;
and is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z = \left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.9}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate is to introduce a sign change on the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, which can be seen&lt;br /&gt;
through&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle - \alpha_1\left\vert{1}\right\rangle.&amp;lt;/math&amp;gt;|2.10}}&lt;br /&gt;
&lt;br /&gt;
The term phase gate is also used for the more general transformation&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;P = \left(\begin{array}{cc} e^{i\theta} &amp;amp; 0 \\ &lt;br /&gt;
                                0       &amp;amp; e^{-i\theta} \end{array}\right).&amp;lt;/math&amp;gt;|2.11}}&lt;br /&gt;
&lt;br /&gt;
For this reason, the z-gate will either be called a “z-gate” or a phase-flip gate.&lt;br /&gt;
&lt;br /&gt;
Another gate closely related to these, is the “y” gate. This gate is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y =  \left(\begin{array}{cc} 0 &amp;amp; -i \\ &lt;br /&gt;
                      i &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.12}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate on a state is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y\left\vert{\psi}\right\rangle = -i\alpha_1\left\vert{0}\right\rangle +i \alpha_0\left\vert{1}\right\rangle &lt;br /&gt;
            = -i(\alpha_1\left\vert{0}\right\rangle - \alpha_0\left\vert{1}\right\rangle)&amp;lt;/math&amp;gt;|2.13}}&lt;br /&gt;
&lt;br /&gt;
From this last expression, it is clear that, up to an overall factor of &amp;lt;math&amp;gt;-i\,\!&amp;lt;/math&amp;gt;, this gate is the same&lt;br /&gt;
as acting on a state with both &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt; gates. However, the order matters, and it&lt;br /&gt;
should be noted that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;XZ = -i Y,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
whereas&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;ZX = i Y.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that the order matters should not be a surprise to anyone since matrices in general&lt;br /&gt;
do not commute. However, such a condition arises so often in quantum mechanics that the&lt;br /&gt;
difference between these two is given an expression and a name. The difference between the two is called the ''commutator'' and is denoted with a &amp;lt;math&amp;gt;[\cdot,\cdot]&amp;lt;/math&amp;gt;. That is, for any two matrices, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;, the commutator is defined to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[A,B] = AB -BA.\,\!&amp;lt;/math&amp;gt;|2.14}}&lt;br /&gt;
For the two gates &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt;,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[X,Z] = -2iY.\,\!&amp;lt;/math&amp;gt;|2.15}}&lt;br /&gt;
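&lt;br /&gt;
The products and the commutator above can be checked directly; the following sketch (an illustration, not part of the text) verifies &amp;lt;math&amp;gt;XZ = -iY\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;ZX = iY\,\!&amp;lt;/math&amp;gt;, and Eq. (2.15):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# the three gates defined in Eqs. (2.7), (2.12), and (2.9)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

XZ = X @ Z             # equals -iY
ZX = Z @ X             # equals +iY
commutator = XZ - ZX   # [X, Z] = -2iY, Eq. (2.15)
```
&lt;br /&gt;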
A very important gate which is used in many quantum information processing protocols,&lt;br /&gt;
including quantum algorithms, is called the Hadamard gate,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.16}}&lt;br /&gt;
In this case, it's helpful to look at what this gate does to the two basis states:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H \left\vert{0}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle + \left\vert{1}\right\rangle), &amp;lt;/math&amp;gt;&amp;lt;br /&amp;gt;&amp;lt;math&amp;gt;H \left\vert{1}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle - \left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.17}}&lt;br /&gt;
&lt;br /&gt;
So the Hadamard gate will take either one of the basis states and produce an equal superposition&lt;br /&gt;
of the two basis states; this is the reason it is so often used in quantum information&lt;br /&gt;
processing tasks. On a generic state,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert{\psi}\right\rangle = \frac{1}{\sqrt{2}}[(\alpha_0+\alpha_1)\left\vert{0}\right\rangle + (\alpha_0-\alpha_1)\left\vert{1}\right\rangle].&amp;lt;/math&amp;gt;|2.18}}&lt;br /&gt;
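&lt;br /&gt;
A short numerical illustration of the Hadamard actions in Eqs. (2.16) and (2.17) (a sketch, not part of the original text):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Hadamard gate, Eq. (2.16)
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Eq. (2.17): each basis state maps to an equal superposition
h0 = H @ ket0   # (|0> + |1>)/sqrt(2)
h1 = H @ ket1   # (|0> - |1>)/sqrt(2)
```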
&lt;br /&gt;
===The Pauli Matrices===&lt;br /&gt;
The three matrices &amp;lt;math&amp;gt;X,\,\!&amp;lt;/math&amp;gt; [[#eq2.7|Eq.(2.7)]] &amp;lt;math&amp;gt;Y,\,\!&amp;lt;/math&amp;gt; [[#eq2.12|Eq.(2.12)]]  and &amp;lt;math&amp;gt; Z \,\!&amp;lt;/math&amp;gt; [[#eq2.9|Eq.(2.9)]] are called the Pauli matrices. They are also sometimes denoted &amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt; respectively. They are ubiquitous in quantum computing and quantum information processing. This is because they, along with the &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix, form a basis for the set of &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrices and can be used to&lt;br /&gt;
describe all &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; unitary transformations as well. We will return to the latter point in the next chapter.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table2.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 2.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot; |Table 2.1: ''The Pauli Matrices.  The table shows the Pauli matrices, three different, but common notations, and the action on a state.  The &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a binary digit, 0 or 1.''&lt;br /&gt;
|-&lt;br /&gt;
|Pauli Matrix&lt;br /&gt;
|Notation 1&lt;br /&gt;
|Notation 2&lt;br /&gt;
|Notation 3&lt;br /&gt;
|Action&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; 1 \\ 1 &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X|x\rangle = |x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; -i \\ i &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y =iXZ\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y|x\rangle = i(-1)^x|x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z|x\rangle = (-1)^x|x\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To show that they form a basis for &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; Hermitian matrices, note that any such matrix can be written in the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;A = \left(\begin{array}{cc} &lt;br /&gt;
                a_0+a_3  &amp;amp; a_1-ia_2 \\ &lt;br /&gt;
                a_1+ia_2 &amp;amp; a_0-a_3 \end{array}\right).&amp;lt;/math&amp;gt;|2.19}}&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;a_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_3\,\!&amp;lt;/math&amp;gt; are arbitrary, &amp;lt;math&amp;gt;a_0 + a_3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_0 - a_3\,\!&amp;lt;/math&amp;gt; are arbitrary too. This matrix can be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}A &amp;amp;= a_0 \mathbb{I} + a_1X + a_2Y + a_3 Z \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + a_1\sigma_1 + a_2\sigma_2 + a_3 \sigma_3 \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + \vec{a}\cdot\vec{\sigma}, \\&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|2.20}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{a}\cdot\vec{\sigma} = \sum_{i=1}^3a_i\sigma_i\,\!&amp;lt;/math&amp;gt; is the &amp;quot;dot&lt;br /&gt;
product&amp;quot; between &amp;lt;math&amp;gt;\vec{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{\sigma} = (\sigma_1,\sigma_2,\sigma_3)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
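&lt;br /&gt;
The decomposition in Eq. (2.20) can be checked numerically: each coefficient is recovered as &amp;lt;math&amp;gt;a_i = \mbox{Tr}(\sigma_i A)/2\,\!&amp;lt;/math&amp;gt;, since the Pauli matrices are traceless and mutually orthogonal. (This sketch, including the particular coefficients, is illustrative and not part of the text.)&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# arbitrary real coefficients a_0 ... a_3
a = [0.5, -1.2, 0.3, 2.0]

# Eq. (2.20): A = a_0 I + a_1 X + a_2 Y + a_3 Z
A = a[0]*I2 + a[1]*X + a[2]*Y + a[3]*Z

hermitian = np.allclose(A, A.conj().T)   # A is Hermitian by construction
# each coefficient is recovered by a_i = Tr(sigma_i A) / 2
recovered = [np.trace(s @ A).real / 2 for s in (I2, X, Y, Z)]
```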
&lt;br /&gt;
An important and useful relationship between these is the following (which shows why&lt;br /&gt;
the latter notation above is so useful)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma_i\sigma_j = \mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k,&amp;lt;/math&amp;gt;|2.21}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;i, j, k\,\!&amp;lt;/math&amp;gt; are numbers from the set &amp;lt;math&amp;gt;\{1, 2, 3\}\,\!&amp;lt;/math&amp;gt;, a sum over the repeated index &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; is implied, and the definitions for &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; are given&lt;br /&gt;
in Eqs. [[Appendix C - Vectors and Linear Algebra#eqC.17|(C.17)]] and [[Appendix C - Vectors and Linear Algebra#eqC.8|(C.8)]] respectively. The three matrices &amp;lt;math&amp;gt;\sigma_1, \sigma_2, \sigma_3\,\!&amp;lt;/math&amp;gt; are traceless Hermitian&lt;br /&gt;
matrices and they can be seen to be orthogonal using the so-called ''Hilbert-Schmidt inner product'', which is defined, for matrices &amp;lt;math&amp;gt; A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(A,B) = \mbox{Tr}(A^\dagger B).&amp;lt;/math&amp;gt;|2.22}}&lt;br /&gt;
&lt;br /&gt;
The orthogonality for the set is then summarized as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\sigma_i,\sigma_j) = \mbox{Tr}(\sigma_i\sigma_j) = 2\delta_{ij}.\,\!&amp;lt;/math&amp;gt;|2.23}}&lt;br /&gt;
&lt;br /&gt;
This property is contained in Eq. [[#eq2.21|(2.21)]]. This one equation also contains all of the commutators.&lt;br /&gt;
Subtracting the equation with the product reversed,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = (\mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k) &lt;br /&gt;
                      -(\mathbb{I}\delta_{ji} +i \epsilon_{jik}\sigma_k),&amp;lt;/math&amp;gt;|2.24}}&lt;br /&gt;
&lt;br /&gt;
but &amp;lt;math&amp;gt;\delta_{ij}=\delta_{ji}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk} = -\epsilon_{jik}\,\!&amp;lt;/math&amp;gt;.  This can now be simplified,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = 2i \epsilon_{ijk}\sigma_k.\,\!&amp;lt;/math&amp;gt;|2.25}}&lt;br /&gt;
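&lt;br /&gt;
The relations in Eqs. (2.21), (2.23), and (2.25) can all be verified by a loop over the indices; the following is a sketch (the explicit Levi-Civita helper is not from the text):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),    # sigma_1 = X
         np.array([[0, -1j], [1j, 0]]),                # sigma_2 = Y
         np.array([[1, 0], [0, -1]], dtype=complex)]   # sigma_3 = Z
I2 = np.eye(2)

def eps(i, j, k):
    # Levi-Civita symbol for 0-based indices
    return (i - j) * (j - k) * (k - i) / 2

ok = True
for i in range(3):
    for j in range(3):
        prod = sigma[i] @ sigma[j]
        rhs = I2 * (i == j) + 1j * sum(eps(i, j, k) * sigma[k] for k in range(3))
        ok &= np.allclose(prod, rhs)                      # Eq. (2.21)
        ok &= np.isclose(np.trace(prod), 2 * (i == j))    # Eq. (2.23)
        comm = prod - sigma[j] @ sigma[i]
        ok &= np.allclose(comm, 2j * sum(eps(i, j, k) * sigma[k]
                                        for k in range(3)))   # Eq. (2.25)
```
&lt;br /&gt;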
&lt;br /&gt;
===States of Many Qubits===&lt;br /&gt;
Let us now consider the states of several (or many) qubits. For one qubit, there are two&lt;br /&gt;
possible basis states, say &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. If there are two qubits, each with these basis states,&lt;br /&gt;
the basis states for the two together are found by using the tensor product. (See Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]].)&lt;br /&gt;
The set of basis states obtained in this way is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{0}\right\rangle\otimes\left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{1}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle\otimes\left\vert{1}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This set is more often written in short-hand notation as (again see Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]] for details and examples)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{00}\right\rangle, \; \left\vert{01}\right\rangle, \;&lt;br /&gt;
  \left\vert{10}\right\rangle, \; \left\vert{11}\right\rangle \right\},\,\!&amp;lt;/math&amp;gt;|2.26}}&lt;br /&gt;
&lt;br /&gt;
which can also be expressed as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left(\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right)&lt;br /&gt;
\right\}.\,\!&amp;lt;/math&amp;gt;|2.27}}&lt;br /&gt;
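&lt;br /&gt;
In NumPy the tensor product of Eqs. (2.26) and (2.27) is the Kronecker product; a brief sketch (not part of the text):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# |1> (x) |0> = |10>, the third vector of the ordered basis in Eq. (2.27)
ket10 = np.kron(ket1, ket0)
```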
&lt;br /&gt;
The extension to three qubits is straightforward,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{000}\right\rangle, \; \left\vert{001}\right\rangle, \;&lt;br /&gt;
  \left\vert{010}\right\rangle, \; \left\vert{011}\right\rangle, \; \left\vert{100}\right\rangle, \; \left\vert{101}\right\rangle, \;&lt;br /&gt;
  \left\vert{110}\right\rangle, \; \left\vert{111}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.28}}&lt;br /&gt;
&lt;br /&gt;
Those familiar with binary will recognize these as the numbers zero through seven. Thus we&lt;br /&gt;
consider this an ''ordered basis'', and the states can equivalently be presented as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{2}\right\rangle, \; \left\vert{3}\right\rangle, \; \left\vert{4}\right\rangle, \; \left\vert{5}\right\rangle, \;&lt;br /&gt;
  \left\vert{6}\right\rangle, \; \left\vert{7}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.29}}&lt;br /&gt;
&lt;br /&gt;
The ordering of the products is important because each spot&lt;br /&gt;
corresponds to a physical particle or physical system.  When some&lt;br /&gt;
confusion may arise, we may also label the ket with a subscript to&lt;br /&gt;
denote the particle or position.  For example, two different people,&lt;br /&gt;
Alice and Bob, can be used to represent distant parties that may&lt;br /&gt;
share some information or wish to communicate.  In this case, the&lt;br /&gt;
state belonging to Alice can be denoted &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_A\,\!&amp;lt;/math&amp;gt;.  Or if she is&lt;br /&gt;
referred to as party 1 or particle 1, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The most general 2-qubit state is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_{00}\left\vert{00}\right\rangle + \alpha_{01}\left\vert{01}\right\rangle &lt;br /&gt;
             + \alpha_{10}\left\vert{10}\right\rangle + \alpha_{11}\left\vert{11}\right\rangle &lt;br /&gt;
           =\left(\begin{array}{c} \alpha_{00} \\ \alpha_{01} \\ &lt;br /&gt;
                                   \alpha_{10} \\ \alpha_{11} \end{array}\right).&amp;lt;/math&amp;gt;|2.30}}&lt;br /&gt;
&lt;br /&gt;
The normalization condition is &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_{00}|^2  + |\alpha_{01}|^2&lt;br /&gt;
             + |\alpha_{10}|^2 + |\alpha_{11}|^2=1.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
The generalization to an arbitrary number of qubits, say &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, is also&lt;br /&gt;
rather straightforward and can be written as &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \sum_{i=0}^{2^n-1} \alpha_i\left\vert{i}\right\rangle.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Quantum Gates for Many Qubits===&lt;br /&gt;
&lt;br /&gt;
Just as in the case of a single qubit, the most general closed-system transformation of a&lt;br /&gt;
state of many qubits is a unitary transformation. Being able to make an arbitrary unitary&lt;br /&gt;
transformation on many qubits is an important task. If an arbitrary unitary transformation&lt;br /&gt;
on a set of qubits can be made, then any quantum gate can be implemented. If this ability to&lt;br /&gt;
implement any arbitrary quantum gate can be accomplished using a particular set of quantum&lt;br /&gt;
gates, that set is said to be a ''universal set of gates'' or that the condition of ''universality'' has&lt;br /&gt;
been met by this set. It turns out that there is a theorem which provides one way for&lt;br /&gt;
identifying a universal set of gates.&lt;br /&gt;
&lt;br /&gt;
'''Theorem:'''&lt;br /&gt;
&lt;br /&gt;
''The ability to implement an entangling gate between any two qubits, plus the ability to implement all single-qubit unitary transformations, will enable universal quantum computing.''&lt;br /&gt;
&lt;br /&gt;
It turns out that one doesn’t need to be able to perform an entangling gate between&lt;br /&gt;
distant qubits; nearest-neighbor interactions are sufficient. We can transfer the state of a&lt;br /&gt;
qubit to a qubit that is next to the one we would like it to interact with, then perform&lt;br /&gt;
the entangling gate between the two and then transfer back.&lt;br /&gt;
&lt;br /&gt;
This is an important and often used theorem which will be the main focus of the next&lt;br /&gt;
few sections. A particular class of two-qubit gates which can be used to entangle qubits will&lt;br /&gt;
be discussed along with circuit diagrams for many qubits.&lt;br /&gt;
&lt;br /&gt;
====Controlled Operations====&lt;br /&gt;
&lt;br /&gt;
A controlled operation is one that is conditioned on the state of another part of the system, usually a qubit. The most cited example is the CNOT (controlled NOT) gate, which flips one (target) qubit if another (control) qubit is in the state &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;; thus it is a controlled NOT operation for qubits. This gate is used often enough to warrant detailed discussion here.&lt;br /&gt;
&lt;br /&gt;
Consider the following matrix operation on two qubits:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;C_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.31}}&lt;br /&gt;
&lt;br /&gt;
Under this transformation, the following changes occur:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{array}{c|c}&lt;br /&gt;
         \; \left\vert{\psi}\right\rangle\; &amp;amp; C_{12}\left\vert{\psi}\right\rangle \\ \hline&lt;br /&gt;
                \left\vert{00}\right\rangle &amp;amp; \left\vert{00}\right\rangle \\&lt;br /&gt;
                \left\vert{01}\right\rangle &amp;amp; \left\vert{01}\right\rangle \\&lt;br /&gt;
                \left\vert{10}\right\rangle &amp;amp; \left\vert{11}\right\rangle \\&lt;br /&gt;
                \left\vert{11}\right\rangle &amp;amp; \left\vert{10}\right\rangle &lt;br /&gt;
\end{array}&amp;lt;/math&amp;gt;|2.32}}&lt;br /&gt;
&lt;br /&gt;
This transformation is called the CNOT, or controlled NOT, since the second bit is flipped&lt;br /&gt;
if the first is in the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt; and otherwise left alone. The circuit diagram for this transformation corresponds to the following representation of the gate. Let &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; be zero or one.&lt;br /&gt;
The CNOT is then given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{x}\right\rangle_{i}\left\vert{y}\right\rangle_{j} \overset{CNOT}{\rightarrow} \left\vert{x}\right\rangle_{i}\left\vert{x\oplus y}\right\rangle_{j}.&amp;lt;/math&amp;gt;|2.33}}&lt;br /&gt;
&lt;br /&gt;
In binary, of course &amp;lt;math&amp;gt;0\oplus 0 =0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;0\oplus 1 = 1 = 1\oplus 0&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;1\oplus 1 =0&amp;lt;/math&amp;gt;.  The circuit diagram is given in Fig. 2.3 below. &lt;br /&gt;
The first qubit at the top of the diagram, &amp;lt;math&amp;gt;\left\vert{x}\right\rangle&amp;lt;/math&amp;gt;, is called the&lt;br /&gt;
''control bit'' while the one below, &amp;lt;math&amp;gt;\left\vert{y}\right\rangle&amp;lt;/math&amp;gt;, is called the ''target bit''.&lt;br /&gt;
&lt;br /&gt;
[[File:CNOT.jpg|center|400px]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Figure 2.3: Circuit diagram for a CNOT gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
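&lt;br /&gt;
The truth-table action of Eqs. (2.31) through (2.33) can be verified directly (a sketch; the small `basis` helper is not from the text):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# CNOT matrix, Eq. (2.31); the first qubit is the control, the second the target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def basis(x, y):
    # the two-qubit product state |x>|y>
    return np.kron(np.eye(2)[x], np.eye(2)[y])

# Eq. (2.33): |x>|y> -> |x>|x XOR y>
ok = all(np.allclose(CNOT @ basis(x, y), basis(x, x ^ y))
         for x in (0, 1) for y in (0, 1))
```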
&lt;br /&gt;
One can immediately generalize the operation of the CNOT to a controlled-U gate. This&lt;br /&gt;
is a gate, shown in Fig. 2.4, which implements a unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; on the second&lt;br /&gt;
qubit, if the state of the first is &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. The matrix transformation is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;CU_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{21} &amp;amp; u_{22} \end{array}\right),&amp;lt;/math&amp;gt;|2.34}}&lt;br /&gt;
&lt;br /&gt;
where the matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;U = \left(\begin{array}{cc}&lt;br /&gt;
          u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
          u_{21} &amp;amp; u_{22} \end{array}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example the controlled-phase gate is given in [[#Figure 2.5|Fig. 2.5]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CU.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.4: Circuit diagram for a CU gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Many-qubit Circuits====&lt;br /&gt;
&lt;br /&gt;
Many-qubit circuits are a straightforward generalization of the single-qubit circuit diagrams.&lt;br /&gt;
For example, Fig. 2.6 shows the implementation of CNOT&amp;lt;math&amp;gt;_{14}&amp;lt;/math&amp;gt; and CNOT&amp;lt;math&amp;gt;_{23}&amp;lt;/math&amp;gt; in the&lt;br /&gt;
same diagram. The crossing of lines is not confusing since there is a target and control&lt;br /&gt;
which are clearly distinguished in each case.&lt;br /&gt;
&lt;br /&gt;
It is quite interesting however, that as the diagrams become more complicated, the possibility&lt;br /&gt;
arises that one may change between equivalent forms of a circuit that, in the end,&lt;br /&gt;
&amp;lt;div id =&amp;quot;Figure 2.5&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CP.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.5: Circuit diagram for a Controlled-phase &amp;lt;math&amp;gt;C_{PHASE}\,\!&amp;lt;/math&amp;gt; gate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Multiqcs.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.6: Multiple CNOT gates on a set of qubits.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
implements the same multiple-qubit unitary. For example, noting that &amp;lt;math&amp;gt;H(C_{PHASE})H = CNOT\,\!&amp;lt;/math&amp;gt;, where the Hadamards act on the target qubit, the two&lt;br /&gt;
circuits in Fig. 2.7 implement the same two-qubit unitary transformation. This enables the&lt;br /&gt;
simplification of some quite complicated circuits.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Hzhequiv.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.7: Two circuits which are equivalent since they implement the same two-qubit&lt;br /&gt;
unitary transformation.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
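&lt;br /&gt;
The circuit identity behind Fig. 2.7, in which Hadamards on the target qubit turn a controlled-phase (controlled-Z) gate into a CNOT, can be checked numerically (a sketch, not part of the text):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

CZ = np.diag([1.0, 1.0, 1.0, -1.0])   # controlled-phase (controlled-Z) gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Hadamard on the target before and after CZ implements a CNOT
IH = np.kron(I2, H)
result = IH @ CZ @ IH
```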
&lt;br /&gt;
===Measurement===&lt;br /&gt;
&lt;br /&gt;
Measurement in quantum mechanics is quite different from that of&lt;br /&gt;
classical mechanics.  In classical mechanics (and computing), one assumes that a measurement&lt;br /&gt;
can be made at will without disturbing or changing the state of the&lt;br /&gt;
physical system.  In quantum mechanics, this assumption cannot be&lt;br /&gt;
made.  This is important for a variety of reasons that will become&lt;br /&gt;
clear later.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Standard Prescription====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the introduction a simple example was provided to distinguish quantum states from classical states.  This example of &lt;br /&gt;
two wells with one particle can (with caution) be used here as well.  &lt;br /&gt;
&lt;br /&gt;
Consider the quantum state in a superposition of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert\psi\right\rangle = \alpha_0\left\vert 0\right\rangle +&lt;br /&gt;
    \alpha_1\left\vert 1\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.35}}&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  If the state is measured in&lt;br /&gt;
the computational basis, the result will be &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  As always, it is important to note that before the measurement it is not in either of the computational basis states but in a superposition of the two.&lt;br /&gt;
&lt;br /&gt;
This can be easily shown by taking the equal superposition &amp;lt;math&amp;gt;\alpha_0 = \alpha_1 = 1/\sqrt{2}\,\!&amp;lt;/math&amp;gt; and acting on the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; with a Hadamard transformation,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert \psi\right\rangle = \left\vert 0\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.36}}&lt;br /&gt;
&lt;br /&gt;
This state, produced from a unitary transformation of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt;, has probability &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; and probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;.  If it were in one or the other, then acting on the state with a Hadamard transformation would give some probability of it being in &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and some probability of being in &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;. (This argument is so&lt;br /&gt;
simple and pointed that it was taken almost word-for-word from  [[Bibliography#Mermin:qcbook|Mermin's book]], page 27.)  &lt;br /&gt;
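&lt;br /&gt;
Mermin's argument can be reproduced numerically for the equal-superposition case (a sketch, not part of the text):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# the equal superposition (|0> + |1>)/sqrt(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# a unitary takes the superposition to |0> with certainty, Eq. (2.36)
out = H @ plus
probs = np.abs(out) ** 2   # probability 1 for |0>, probability 0 for |1>
```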
&lt;br /&gt;
A measurement in the computational basis is said to project this state into either the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probabilities &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt; respectively.  To understand this as a projection, consider the following way in which the &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;-component of the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is found.  The state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is projected onto the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; mathematically by taking the [[Index#I|inner product]] (see [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|Section C.4]]) of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle 0\mid  \psi\right\rangle = \alpha_0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.37}}&lt;br /&gt;
&lt;br /&gt;
Notice that this is a complex number and that its complex conjugate&lt;br /&gt;
can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi \mid 0\right\rangle = \alpha_0^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.38}}&lt;br /&gt;
&lt;br /&gt;
Therefore the probability can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi\mid 0 \right\rangle \left\langle 0\mid\psi\right\rangle = \left\vert\left\langle &lt;br /&gt;
  0\mid \psi\right\rangle \right\vert^2.\,\!&amp;lt;/math&amp;gt;|2.39}}&lt;br /&gt;
&lt;br /&gt;
Now consider a multiple-qubit system with state &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert \Psi\right\rangle = \sum_i \alpha_i\left\vert i\right\rangle.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
The result of a measurement is a projection and the&lt;br /&gt;
state is projected onto the basis state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt; ---the same properties are true of this more general&lt;br /&gt;
system.  &lt;br /&gt;
&lt;br /&gt;
To summarize, if a measurement is made on the system &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; is obtained with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Assuming that &amp;lt;math&amp;gt;\left\vert i\right\rangle \,\!&amp;lt;/math&amp;gt; results from the measurement, the state of the&lt;br /&gt;
system has been projected into the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  Therefore, the&lt;br /&gt;
state of the system immediately after the measurement is &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
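&lt;br /&gt;
This prescription can be simulated by sampling an outcome &amp;lt;math&amp;gt;i\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt; and replacing the state by &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;; the following sketch (the state and the random seed are chosen for illustration) is not part of the text:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(seed=1)

# a normalized multi-qubit state, Eq. (2.40)
psi = np.array([0.5, 0.5, 0.5, 0.5j])

probs = np.abs(psi) ** 2                  # Born probabilities |alpha_i|^2
outcome = rng.choice(len(psi), p=probs)   # the measured basis label i

# immediately after the measurement the state is |i>
post = np.zeros(len(psi), dtype=complex)
post[outcome] = 1.0
```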
&lt;br /&gt;
A circuit diagram with a measurement represented by a box with an&lt;br /&gt;
arrow is given in Figure 2.8.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementcd.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.8: The circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
An alternative is to put an &amp;lt;nowiki&amp;gt;&amp;quot;M&amp;quot;&amp;lt;/nowiki&amp;gt; inside the box.  This is shown in Fig. 2.9.  &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementM.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.9: An alternative circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an example, the measurement result can be used as input to a subsequent operation.  The unitary transformation&lt;br /&gt;
in Figure 2.10 is one that depends upon the outcome of the&lt;br /&gt;
measurement.  Notice that this input, since it is classical&lt;br /&gt;
information, is represented by a double line.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurement.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.10: A circuit which includes a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Projection Operators====&lt;br /&gt;
&lt;br /&gt;
Projection operators are used quite often and the description of&lt;br /&gt;
measurement in the previous section is a good example of how they are&lt;br /&gt;
used.  One may ask, what is a projector?  In ordinary&lt;br /&gt;
three-dimensional space, a vector is written as &lt;br /&gt;
&amp;lt;math&amp;gt;\vec v=v_x\hat{x}+v_y\hat{y}+v_z\hat{z}\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; part of the&lt;br /&gt;
vector can be obtained by &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x}\cdot\vec v) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
This is the part of the vector lying along the x axis.  Notice that if&lt;br /&gt;
the projection is performed again, the same result is obtained&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x} \cdot v_x\hat{x}) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.41}}&lt;br /&gt;
&lt;br /&gt;
This is the defining characteristic of projection operations.  When one is&lt;br /&gt;
performed twice, the second result is the same as the first.  &lt;br /&gt;
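The two projection equations above can be checked directly; a minimal sketch, assuming NumPy and an arbitrary example vector:

```python
# Sketch: projecting a 3D vector onto the x axis, xhat(xhat . v) = v_x xhat,
# and checking that projecting a second time changes nothing (idempotence).
import numpy as np

xhat = np.array([1.0, 0.0, 0.0])
v = np.array([3.0, -2.0, 5.0])        # illustrative vector

proj_once = xhat * np.dot(xhat, v)    # v_x * xhat
proj_twice = xhat * np.dot(xhat, proj_once)

assert np.allclose(proj_once, [3.0, 0.0, 0.0])
assert np.allclose(proj_once, proj_twice)   # second result equals the first
```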
&lt;br /&gt;
This can be extended to the complex vectors in quantum mechanics.  The&lt;br /&gt;
outer product &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\!\!\left\langle{x}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector.  For example,&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector and can be written in matrix form as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert = \left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.42}}&lt;br /&gt;
&lt;br /&gt;
Acting with this on &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
           \alpha_1 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.43}}&lt;br /&gt;
&lt;br /&gt;
Acting again produces&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
              0 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.44}}&lt;br /&gt;
&lt;br /&gt;
This is due to the fact that&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert)^2 = \left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.45}}&lt;br /&gt;
&lt;br /&gt;
In fact, this property essentially defines a projection.  A projection is&lt;br /&gt;
a linear transformation &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;P^2 = P\,\!&amp;lt;/math&amp;gt;. Much of our intuition about geometric projections in&lt;br /&gt;
three dimensions carries over to the more abstract cases.  One important&lt;br /&gt;
example is that the sum over all projectors is the identity. The&lt;br /&gt;
generalization to arbitrary dimensions, where &amp;lt;math&amp;gt;\left\vert{i}\right\rangle\,\!&amp;lt;/math&amp;gt; is any basis&lt;br /&gt;
vector in that space, is immediate.  In this case the identity,&lt;br /&gt;
expressed as a sum over all projectors, is &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sum_{i} \left\vert{i}\right\rangle\!\!\left\langle{i}\right\vert = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.46}}&lt;br /&gt;
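The matrix calculations of Eqs. (2.42)-(2.46) can be reproduced in a few lines; NumPy is assumed, and the state amplitudes are illustrative:

```python
# Sketch: the projector |0><0| as an outer product, its idempotence,
# and the completeness relation sum_i |i><i| = identity.
import numpy as np

ket0 = np.array([[1.0], [0.0]])
ket1 = np.array([[0.0], [1.0]])

P0 = ket0 @ ket0.conj().T            # |0><0| = [[1,0],[0,0]], Eq. (2.42)
P1 = ket1 @ ket1.conj().T

psi = np.array([[0.6], [0.8j]])      # alpha_0 = 0.6, alpha_1 = 0.8i
assert np.allclose(P0 @ psi, [[0.6], [0.0]])   # Eq. (2.43): keeps alpha_0
assert np.allclose(P0 @ P0, P0)                # Eq. (2.45): P^2 = P
assert np.allclose(P0 + P1, np.eye(2))         # Eq. (2.46) for one qubit
```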
&lt;br /&gt;
====Phase in/Phase out====&lt;br /&gt;
&lt;br /&gt;
The probability of finding the system in the state &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
where &amp;lt;math&amp;gt;x=0\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi}\right\rangle}(\left\vert{x}\right\rangle) &amp;amp;= \left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.47}}&lt;br /&gt;
Note that &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\langle{\psi}\right\vert\,\!&amp;lt;/math&amp;gt; both appear in this&lt;br /&gt;
expression. So if &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; were &lt;br /&gt;
substituted into the expression for &amp;lt;math&amp;gt;\mbox{Prob}(\left\vert{x}\right\rangle)\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
expression would be unchanged, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi^\prime}\right\rangle}(\left\vert{x}\right\rangle) &lt;br /&gt;
                     &amp;amp;= \left\langle{\psi^\prime}\mid{x}\right\rangle\left\langle{x}\mid{\psi^\prime}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= e^{-i\theta}\left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle e^{i\theta} \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.48}}&lt;br /&gt;
Therefore when &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; changes by a phase, there is no effect on&lt;br /&gt;
this probability.  This is why it is often said that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
         e^{i\theta} &amp;amp; 0 \\&lt;br /&gt;
               0  &amp;amp; e^{-i\theta}  \end{array}\right) &lt;br /&gt;
= e^{i\theta}\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.49}}&lt;br /&gt;
is equivalent to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.50}}&lt;br /&gt;
&lt;br /&gt;
However, there are times when a phase can make a difference. In&lt;br /&gt;
those cases it is really a ''relative'' phase between two states that matters. This will become clear later on.&lt;br /&gt;
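The phase cancellation in Eq. (2.48) is easy to verify numerically; a minimal sketch, assuming NumPy and illustrative amplitudes:

```python
# Sketch: a global phase e^{-i theta} drops out of the probability
# |<x|psi>|^2, as in Eq. (2.48).
import numpy as np

psi = np.array([0.6, 0.8])                  # alpha_0 = 0.6, alpha_1 = 0.8
theta = 0.7
psi_prime = np.exp(-1j * theta) * psi       # |psi'> = e^{-i theta} |psi>

for x in (0, 1):
    p = abs(psi[x]) ** 2                    # Prob_|psi>(|x>)
    p_prime = abs(psi_prime[x]) ** 2        # Prob_|psi'>(|x>)
    assert np.isclose(p, p_prime)           # global phase is unobservable
```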
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 3 - Physics of Quantum Information#Introduction|Continue to '''Chapter 3 - Physics of Quantum Information''']]&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1783</id>
		<title>Chapter 2 - Qubits and Collections of Qubits</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1783"/>
		<updated>2012-01-05T08:33:04Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Many-qubit Circuits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
There are several parts to any quantum information processing task. Some of these were&lt;br /&gt;
written down and discussed by David DiVincenzo in the early days of quantum computing&lt;br /&gt;
research and are therefore called DiVincenzo’s requirements for quantum computing. These&lt;br /&gt;
include, but are not limited to, the following, which will be discussed in this chapter. Other&lt;br /&gt;
requirements will be discussed later.&lt;br /&gt;
&lt;br /&gt;
Five requirements [[Bibliography#qcrequirements|DiVincenzo:2000]]:&lt;br /&gt;
#Be a scalable physical system with well-defined qubits&lt;br /&gt;
#Be initializable to a simple fiducial state such as &amp;lt;math&amp;gt;\left\vert{000...}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
#Have much longer decoherence times than gating times&lt;br /&gt;
#Have a universal set of quantum gates&lt;br /&gt;
#Permit qubit-specific measurements&lt;br /&gt;
&lt;br /&gt;
The first requirement is a set of two-state quantum systems which can serve as qubits. The&lt;br /&gt;
second is to be able to initialize the set of qubits to some reference state. In this chapter,&lt;br /&gt;
these will be taken for granted. The third concerns noise, which has become known by &lt;br /&gt;
the term decoherence. The term decoherence has had a more precise definition in the past,&lt;br /&gt;
but here it will usually be synonymous with noise. Noise and decoherence will be discussed in [[Chapter 6 - Noise in Quantum Systems|Chapter 6]].  This chapter is primarily concerned with the fifth of these criteria.  This will enable us to discuss many interesting aspects of quantum information problems while postponing some other technical details regarding the other criteria.&lt;br /&gt;
&lt;br /&gt;
===Qubit States===&lt;br /&gt;
&lt;br /&gt;
As mentioned in the introduction, a qubit, or quantum bit, is represented by a two-state&lt;br /&gt;
quantum system. Although it is referred to as a two-state quantum system, many&lt;br /&gt;
physical examples of qubits are realized by two selected states of a quantum&lt;br /&gt;
system that has many available states. These two states are represented by the vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; and the qubit could be in the state &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;, the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, or a complex superposition of&lt;br /&gt;
these two. A qubit state which is an arbitrary superposition is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle,&amp;lt;/math&amp;gt; |2.1}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers. Our objective is to use these two states to store and&lt;br /&gt;
manipulate information. If the state of the system is confined to one state, the other, or a&lt;br /&gt;
superposition of the two, then&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1.\,\!&amp;lt;/math&amp;gt; |2.2}}&lt;br /&gt;
&lt;br /&gt;
This means that this vector is normalized, i.e. its magnitude (or length) is one. These state&lt;br /&gt;
vectors live in a two-dimensional complex (so four-dimensional real) vector space.&amp;lt;ref name=&amp;quot;test&amp;quot;&amp;gt;[[Appendix B - Complex Numbers|Appendix B]] contains a basic introduction to complex numbers.&amp;lt;/ref&amp;gt; The basis vectors for such a space are the two vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which are called ''computational basis'' states. These two basis states are represented by&lt;br /&gt;
 &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{0}\right\rangle = \left(\begin{array}{c} 1 \\ 0\end{array}\right), \;\;\left\vert{1}\right\rangle = \left(\begin{array}{c} 0 \\ 1\end{array}\right).&amp;lt;/math&amp;gt; |2.3}}&lt;br /&gt;
&lt;br /&gt;
Thus, the qubit state can be rewritten as&lt;br /&gt;
&lt;br /&gt;
{{Equation |&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1\end{array}\right).&amp;lt;/math&amp;gt; |2.4}}&lt;br /&gt;
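Eqs. (2.1)-(2.4) translate directly into code; a minimal sketch, assuming NumPy and illustrative amplitudes:

```python
# Sketch: a qubit state alpha_0|0> + alpha_1|1> as a complex 2-vector,
# with the normalization |alpha_0|^2 + |alpha_1|^2 = 1 of Eq. (2.2).
import numpy as np

ket0 = np.array([1.0, 0.0])          # |0>, Eq. (2.3)
ket1 = np.array([0.0, 1.0])          # |1>, Eq. (2.3)

alpha0, alpha1 = 1 / np.sqrt(2), 1j / np.sqrt(2)   # illustrative amplitudes
psi = alpha0 * ket0 + alpha1 * ket1

assert np.allclose(psi, [alpha0, alpha1])                  # Eq. (2.4)
assert np.isclose(abs(alpha0)**2 + abs(alpha1)**2, 1.0)    # Eq. (2.2)
```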
&lt;br /&gt;
===Qubit Gates===&lt;br /&gt;
&lt;br /&gt;
During a computation, one qubit state will need to be taken to a different one. In fact,&lt;br /&gt;
any valid state should be able to be operated upon to obtain any other state. Since this&lt;br /&gt;
is a complex vector with magnitude one, the matrix transformation required for closed system&lt;br /&gt;
evolution is unitary. (See [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Appendix C, Sec. C.3.8]].) These unitary matrices, or unitary&lt;br /&gt;
transformations, as well as their generalization to many qubits, transform one complex&lt;br /&gt;
vector into another and are also called ''quantum gates'', or gating operations. Mathematically,&lt;br /&gt;
we may think of them as rotations of the complex vector and in some cases (but not all)&lt;br /&gt;
correspond to actual rotations of the physical system.&lt;br /&gt;
&lt;br /&gt;
====Circuit Diagrams for Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
Unitary transformations are represented in a circuit diagram with a box around the unitary&lt;br /&gt;
transformation. Consider a unitary transformation &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on a single qubit state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;. If the&lt;br /&gt;
result of the transformation is &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, we can then write&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = V\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.5}}&lt;br /&gt;
&lt;br /&gt;
The corresponding circuit diagram is shown in Fig. 2.1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:Vbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.1: Circuit diagram for a one-qubit gate that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;. The input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the output, &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notice that the diagram is read from left to right. This means that if two consecutive&lt;br /&gt;
gates are implemented, say &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;, the equation reads:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle = UV\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.6}}&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The circuit diagram will have the boxes in the reverse order from the equation, i.e.&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on the left and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; on the right (refer to Fig. 2.2 below). While this is somewhat confusing, it is important to remember this convention; circuit diagrams will become increasingly important as the number of operations grows larger.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:UVbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.2: Circuit diagram for two one-qubit gates that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; followed by another unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt;. Like the single gate, the input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the new output, &amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
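The ordering convention can be made concrete with a short sketch; NumPy is assumed, and the choice of &amp;lt;math&amp;gt;V = X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;U = Z&amp;lt;/math&amp;gt; here is purely illustrative:

```python
# Sketch: applying V first and then U gives |psi''> = U V |psi>,
# i.e. matrices act right-to-left even though the diagram reads left-to-right.
import numpy as np

V = np.array([[0, 1], [1, 0]])     # applied first (leftmost box)
U = np.array([[1, 0], [0, -1]])    # applied second

psi = np.array([0.6, 0.8])
step1 = V @ psi                    # state after the first box
psi_pp = U @ step1                 # |psi''>

assert np.allclose(psi_pp, (U @ V) @ psi)   # same as the product UV
```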
&lt;br /&gt;
====Examples of Important Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
There are, of course, an infinite number of possible unitary transformations that we could&lt;br /&gt;
implement on a single qubit since the set of unitary transformations can be parameterized by&lt;br /&gt;
three parameters. However, a single gate will contain a single unitary transformation, which&lt;br /&gt;
means that all three parameters are fixed. There are several such transformations that are&lt;br /&gt;
used repeatedly. For this reason, they are listed here along with their actions on a generic&lt;br /&gt;
state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;. Note that one could also completely define the transformation by&lt;br /&gt;
its action on a complete set of basis states.&lt;br /&gt;
&lt;br /&gt;
The following is called an &amp;lt;nowiki&amp;gt;“x”&amp;lt;/nowiki&amp;gt; gate, or a bit-flip, &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X = \left(\begin{array}{cc} 0 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.7}}&lt;br /&gt;
&lt;br /&gt;
Its action on a state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is to exchange the basis states,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X\left\vert{\psi}\right\rangle = \alpha_0\left\vert{1}\right\rangle + \alpha_1\left\vert{0}\right\rangle.&amp;lt;/math&amp;gt;|2.8}}&lt;br /&gt;
&lt;br /&gt;
For this reason it is also sometimes called a NOT gate. However, this term will be avoided&lt;br /&gt;
because a general NOT gate does not exist for all quantum states. (It does work for all qubit&lt;br /&gt;
states, but this is a special case.)&lt;br /&gt;
&lt;br /&gt;
The next gate is called a ''phase gate'' or a “z” gate. It is also sometimes called a ''phase-flip'',&lt;br /&gt;
and is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z = \left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.9}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate is to introduce a sign change on the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which can be seen&lt;br /&gt;
through&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle - \alpha_1\left\vert{1}\right\rangle.&amp;lt;/math&amp;gt;|2.10}}&lt;br /&gt;
&lt;br /&gt;
The term phase gate is also used for the more general transformation&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;P = \left(\begin{array}{cc} e^{i\theta} &amp;amp; 0 \\ &lt;br /&gt;
                                0       &amp;amp; e^{-i\theta} \end{array}\right).&amp;lt;/math&amp;gt;|2.11}}&lt;br /&gt;
&lt;br /&gt;
For this reason, the z-gate will either be called a “z-gate” or a phase-flip gate.&lt;br /&gt;
&lt;br /&gt;
Another gate closely related to these is the “y” gate. This gate is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y =  \left(\begin{array}{cc} 0 &amp;amp; -i \\ &lt;br /&gt;
                      i &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.12}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate on a state is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y\left\vert{\psi}\right\rangle = -i\alpha_1\left\vert{0}\right\rangle +i \alpha_0\left\vert{1}\right\rangle &lt;br /&gt;
            = -i(\alpha_1\left\vert{0}\right\rangle - \alpha_0\left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.13}}&lt;br /&gt;
&lt;br /&gt;
From this last expression, it is clear that, up to an overall factor of &amp;lt;math&amp;gt;-i\,\!&amp;lt;/math&amp;gt;, this gate is the same&lt;br /&gt;
as acting on a state with both &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt; gates. However, the order matters, and it&lt;br /&gt;
should be noted that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;XZ = -i Y,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
whereas&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;ZX = i Y.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that the order matters should not be a surprise to anyone since matrices in general&lt;br /&gt;
do not commute. However, such a condition arises so often in quantum mechanics that the&lt;br /&gt;
difference between these two is given an expression and a name. The difference between the two is called the ''commutator'' and is denoted with a &amp;lt;math&amp;gt;[\cdot,\cdot]&amp;lt;/math&amp;gt;. That is, for any two matrices, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;, the commutator is defined to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[A,B] = AB -BA.\,\!&amp;lt;/math&amp;gt;|2.14}}&lt;br /&gt;
For the two gates &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt;,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[X,Z] = -2iY.\,\!&amp;lt;/math&amp;gt;|2.15}}&lt;br /&gt;
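The gate actions and commutator relations above are quick to verify; a minimal sketch, assuming NumPy and an illustrative state:

```python
# Sketch: the X, Z, Y gates of Eqs. (2.7)-(2.13) and the relations
# XZ = -iY, ZX = iY, and [X, Z] = -2iY of Eq. (2.15).
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
Y = np.array([[0, -1j], [1j, 0]])

psi = np.array([0.6, 0.8])                   # alpha_0 = 0.6, alpha_1 = 0.8
assert np.allclose(X @ psi, [0.8, 0.6])      # bit-flip swaps amplitudes
assert np.allclose(Z @ psi, [0.6, -0.8])     # phase-flip negates alpha_1

assert np.allclose(X @ Z, -1j * Y)
assert np.allclose(Z @ X, 1j * Y)
assert np.allclose(X @ Z - Z @ X, -2j * Y)   # [X, Z] = -2iY
```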
A very important gate which is used in many quantum information processing protocols,&lt;br /&gt;
including quantum algorithms, is called the Hadamard gate,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.16}}&lt;br /&gt;
In this case, it's helpful to look at what this gate does to the two basis states:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H \left\vert{0}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle + \left\vert{1}\right\rangle), &amp;lt;/math&amp;gt;&amp;lt;br /&amp;gt;&amp;lt;math&amp;gt;H \left\vert{1}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle - \left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.17}}&lt;br /&gt;
&lt;br /&gt;
So the Hadamard gate will take either one of the basis states and produce an equal superposition&lt;br /&gt;
of the two basis states; this is the reason it is so-often used in quantum information&lt;br /&gt;
processing tasks. On a generic state,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert{\psi}\right\rangle = \frac{1}{\sqrt{2}}[(\alpha_0+\alpha_1)\left\vert{0}\right\rangle + (\alpha_0-\alpha_1)\left\vert{1}\right\rangle].&amp;lt;/math&amp;gt;|2.18}}&lt;br /&gt;
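The Hadamard actions of Eq. (2.17) can be checked in a few lines; NumPy is assumed:

```python
# Sketch: the Hadamard gate of Eq. (2.16) takes each basis state
# to an equal superposition, Eq. (2.17).
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

assert np.allclose(H @ ket0, (ket0 + ket1) / np.sqrt(2))   # Eq. (2.17)
assert np.allclose(H @ ket1, (ket0 - ket1) / np.sqrt(2))   # Eq. (2.17)
assert np.allclose(H @ H, np.eye(2))    # H is its own inverse
```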
&lt;br /&gt;
===The Pauli Matrices===&lt;br /&gt;
The three matrices &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; [[#eq2.7|Eq.(2.7)]], &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt; [[#eq2.12|Eq.(2.12)]], and &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; [[#eq2.9|Eq.(2.9)]] are called the Pauli matrices. They are also sometimes denoted &amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt; respectively. They are ubiquitous in quantum computing and quantum information processing. This is because they, along with the &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix, form a basis for the set of &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrices and can be used to&lt;br /&gt;
describe all &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; unitary transformations as well. We will return to the latter point in the next chapter.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table2.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 2.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot; |Table 2.1: ''The Pauli Matrices.  The table shows the Pauli matrices, three different, but common notations, and the action on a state.  The &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a binary digit, 0 or 1.''&lt;br /&gt;
|-&lt;br /&gt;
|Pauli Matrix&lt;br /&gt;
|Notation 1&lt;br /&gt;
|Notation 2&lt;br /&gt;
|Notation 3&lt;br /&gt;
|Action&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; 1 \\ 1 &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X|x\rangle = |x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; -i \\ i &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y =iXZ\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y|x\rangle = i(-1)^x|x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z|x\rangle = (-1)^x|x\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To show that they form a basis for &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; Hermitian matrices, note that any such matrix can be written in the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;A = \left(\begin{array}{cc} &lt;br /&gt;
                a_0+a_3  &amp;amp; a_1+ia_2 \\ &lt;br /&gt;
                a_1-ia_2 &amp;amp; a_0-a_3 \end{array}\right).&amp;lt;/math&amp;gt;|2.19}}&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;a_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_3\,\!&amp;lt;/math&amp;gt; are arbitrary, &amp;lt;math&amp;gt;a_0 + a_3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_0 - a_3\,\!&amp;lt;/math&amp;gt; are arbitrary too. This matrix can be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}A &amp;amp;= a_0 \mathbb{I} + a_1X + a_2Y + a_3 Z \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + a_1\sigma_1 + a_2\sigma_2 + a_3 \sigma_3 \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + \vec{a}\cdot\vec{\sigma}, \\&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|2.20}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{a}\cdot\vec{\sigma} = \sum_{i=1}^3a_i\sigma_i\,\!&amp;lt;/math&amp;gt; is the &amp;quot;dot&lt;br /&gt;
product&amp;quot; between &amp;lt;math&amp;gt;\vec{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{\sigma} = (\sigma_1,\sigma_2,\sigma_3)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
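The decomposition in Eq. (2.20) can be checked numerically; NumPy is assumed, the coefficients are illustrative, and the trace formula used to recover them follows from the orthogonality of the Pauli matrices:

```python
# Sketch: any 2x2 Hermitian A decomposes as a_0 I + a . sigma, Eq. (2.20),
# and the real coefficients can be recovered as a_mu = Tr(sigma_mu A) / 2.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

a = [0.3, -1.0, 0.5, 2.0]                       # illustrative real coefficients
A = a[0]*I2 + a[1]*X + a[2]*Y + a[3]*Z          # Eq. (2.20)

assert np.allclose(A, A.conj().T)               # A is Hermitian
recovered = [np.trace(s @ A).real / 2 for s in (I2, X, Y, Z)]
assert np.allclose(recovered, a)                # coefficients recovered
```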
&lt;br /&gt;
An important and useful relationship between these is the following (which shows why&lt;br /&gt;
the latter notation above is so useful)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma_i\sigma_j = \mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k,&amp;lt;/math&amp;gt;|2.21}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;i, j, k\,\!&amp;lt;/math&amp;gt; are numbers from the set &amp;lt;math&amp;gt;\{1, 2, 3\}\,\!&amp;lt;/math&amp;gt; and the definitions for &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; are given&lt;br /&gt;
in Eqs. [[Appendix C - Vectors and Linear Algebra#eqC.17|(C.17)]] and [[Appendix C - Vectors and Linear Algebra#eqC.8|(C.8)]] respectively. The three matrices &amp;lt;math&amp;gt;\sigma_1, \sigma_2, \sigma_3\,\!&amp;lt;/math&amp;gt; are traceless Hermitian&lt;br /&gt;
matrices and they can be seen to be orthogonal using the so-called ''Hilbert-Schmidt inner product'', which is defined, for matrices &amp;lt;math&amp;gt; A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(A,B) = \mbox{Tr}(A^\dagger B).&amp;lt;/math&amp;gt;|2.22}}&lt;br /&gt;
&lt;br /&gt;
The orthogonality for the set is then summarized as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\sigma_i,\sigma_j) = \mbox{Tr}(\sigma_i\sigma_j) = 2\delta_{ij}.\,\!&amp;lt;/math&amp;gt;|2.23}}&lt;br /&gt;
&lt;br /&gt;
This property is contained in Eq. [[#eq2.21|(2.21)]]. This one equation also contains all of the commutators.&lt;br /&gt;
Subtracting the equation with the product reversed,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = (\mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k) &lt;br /&gt;
                      -(\mathbb{I}\delta_{ji} +i \epsilon_{jik}\sigma_k),&amp;lt;/math&amp;gt;|2.24}}&lt;br /&gt;
&lt;br /&gt;
but &amp;lt;math&amp;gt;\delta_{ij}=\delta_{ji}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk} = -\epsilon_{jik}\,\!&amp;lt;/math&amp;gt;.  This can now be simplified,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = 2i \epsilon_{ijk}\sigma_k.\,\!&amp;lt;/math&amp;gt;|2.25}}&lt;br /&gt;
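The product rule of Eq. (2.21), the orthogonality of Eq. (2.23), and the commutators it implies can all be verified over every index pair; a minimal sketch, NumPy assumed:

```python
# Sketch: verifying sigma_i sigma_j = I delta_ij + i epsilon_ijk sigma_k
# (Eq. 2.21) and Tr(sigma_i sigma_j) = 2 delta_ij (Eq. 2.23).
import numpy as np

sigma = [np.array([[0, 1], [1, 0]]),       # sigma_1
         np.array([[0, -1j], [1j, 0]]),    # sigma_2
         np.array([[1, 0], [0, -1]])]      # sigma_3

def eps(i, j, k):
    """Levi-Civita symbol on indices 0, 1, 2."""
    return (i - j) * (j - k) * (k - i) / 2

for i in range(3):
    for j in range(3):
        rhs = np.eye(2) * (i == j) + sum(
            1j * eps(i, j, k) * sigma[k] for k in range(3))
        assert np.allclose(sigma[i] @ sigma[j], rhs)                    # Eq. (2.21)
        assert np.isclose(np.trace(sigma[i] @ sigma[j]), 2 * (i == j))  # Eq. (2.23)
```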
&lt;br /&gt;
===States of Many Qubits===&lt;br /&gt;
Let us now consider the states of several (or many) qubits. For one qubit, there are two&lt;br /&gt;
possible basis states, say &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. If there are two qubits, each with these basis states,&lt;br /&gt;
the basis states for the two together are found by using the tensor product. (See Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]].)&lt;br /&gt;
The set of basis states obtained in this way is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{0}\right\rangle\otimes\left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{1}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle\otimes\left\vert{1}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This set is more often written in short-hand notation as (again see Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]] for details and examples)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{00}\right\rangle, \; \left\vert{01}\right\rangle, \;&lt;br /&gt;
  \left\vert{10}\right\rangle, \; \left\vert{11}\right\rangle \right\},\,\!&amp;lt;/math&amp;gt;|2.26}}&lt;br /&gt;
&lt;br /&gt;
which can also be expressed as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left(\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right)&lt;br /&gt;
\right\}.\,\!&amp;lt;/math&amp;gt;|2.27}}&lt;br /&gt;
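The column vectors of Eq. (2.27) can be generated directly from the tensor product, here realized as the Kronecker product (a sketch assuming Python with numpy):&lt;br /&gt;

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# Two-qubit basis |00>, |01>, |10>, |11> built from Kronecker products;
# the ordering matches Eqs. (2.26)-(2.27)
basis = [np.kron(a, b) for a in (ket0, ket1) for b in (ket0, ket1)]
for n, v in enumerate(basis):
    # the n-th basis state is the column vector with a single 1 in slot n
    assert v[n] == 1 and v.sum() == 1
```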
&lt;br /&gt;
The extension to three qubits is straightforward,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{000}\right\rangle, \; \left\vert{001}\right\rangle, \;&lt;br /&gt;
  \left\vert{010}\right\rangle, \; \left\vert{011}\right\rangle, \; \left\vert{100}\right\rangle, \; \left\vert{101}\right\rangle, \;&lt;br /&gt;
  \left\vert{110}\right\rangle, \; \left\vert{111}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.28}}&lt;br /&gt;
&lt;br /&gt;
Those familiar with binary will recognize these as the numbers zero through seven, so we&lt;br /&gt;
consider this an ''ordered basis''.  The states can therefore also be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{2}\right\rangle, \; \left\vert{3}\right\rangle, \; \left\vert{4}\right\rangle, \; \left\vert{5}\right\rangle, \;&lt;br /&gt;
  \left\vert{6}\right\rangle, \; \left\vert{7}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.29}}&lt;br /&gt;
&lt;br /&gt;
The ordering of the products is important because each spot&lt;br /&gt;
corresponds to a physical particle or physical system.  When some&lt;br /&gt;
confusion may arise, we may also label the ket with a subscript to&lt;br /&gt;
denote the particle or position.  For example, two different people,&lt;br /&gt;
Alice and Bob, can be used to represent distant parties that may&lt;br /&gt;
share some information or wish to communicate.  In this case, the&lt;br /&gt;
state belonging to Alice can be denoted &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_A\,\!&amp;lt;/math&amp;gt;.  Or if she is&lt;br /&gt;
referred to as party 1 or particle 1, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The most general 2-qubit state is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_{00}\left\vert{00}\right\rangle + \alpha_{01}\left\vert{01}\right\rangle &lt;br /&gt;
             + \alpha_{10}\left\vert{10}\right\rangle + \alpha_{11}\left\vert{11}\right\rangle &lt;br /&gt;
           =\left(\begin{array}{c} \alpha_{00} \\ \alpha_{01} \\ &lt;br /&gt;
                                   \alpha_{10} \\ \alpha_{11} \end{array}\right).&amp;lt;/math&amp;gt;|2.30}}&lt;br /&gt;
&lt;br /&gt;
The normalization condition is &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_{00}|^2  + |\alpha_{01}|^2&lt;br /&gt;
             + |\alpha_{10}|^2 + |\alpha_{11}|^2=1.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
The generalization to an arbitrary number of qubits, say &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, is also&lt;br /&gt;
rather straightforward and can be written as &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \sum_{i=0}^{2^n-1} \alpha_i\left\vert{i}\right\rangle.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
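For instance, here is a sketch (assuming numpy; the random amplitudes are just an example) of a normalized &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;-qubit state, with the basis labels read as binary strings as in Eqs. (2.28)-(2.29):&lt;br /&gt;

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
# Random complex amplitudes alpha_i, i = 0 .. 2^n - 1, then normalized
alpha = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
alpha = alpha / np.linalg.norm(alpha)

# The normalization condition: sum_i |alpha_i|^2 = 1
assert np.isclose(np.sum(np.abs(alpha)**2), 1.0)

# Each label i is equivalently an n-bit string of individual qubit states
labels = [format(i, "0%db" % n) for i in range(2**n)]
assert labels[5] == "101"
```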
&lt;br /&gt;
===Quantum Gates for Many Qubits===&lt;br /&gt;
&lt;br /&gt;
Just as in the case of a single qubit, the most general closed-system transformation of a&lt;br /&gt;
state of many qubits is a unitary transformation. Being able to make an arbitrary unitary&lt;br /&gt;
transformation on many qubits is an important task. If an arbitrary unitary transformation&lt;br /&gt;
on a set of qubits can be made, then any quantum gate can be implemented. If this ability to&lt;br /&gt;
implement any arbitrary quantum gate can be accomplished using a particular set of quantum&lt;br /&gt;
gates, that set is said to be a ''universal set of gates'' or that the condition of ''universality'' has&lt;br /&gt;
been met by this set. It turns out that there is a theorem which provides one way of&lt;br /&gt;
identifying a universal set of gates.&lt;br /&gt;
&lt;br /&gt;
'''Theorem:'''&lt;br /&gt;
&lt;br /&gt;
''The ability to implement an entangling gate between any two qubits, plus the ability to implement all single-qubit unitary transformations, will enable universal quantum computing.''&lt;br /&gt;
&lt;br /&gt;
It turns out that one doesn’t need to be able to perform an entangling gate between&lt;br /&gt;
distant qubits; nearest-neighbor interactions are sufficient. We can transfer the state of a&lt;br /&gt;
qubit to a qubit that is next to the one we would like it to interact with, then perform&lt;br /&gt;
the entangling gate between the two and then transfer back.&lt;br /&gt;
&lt;br /&gt;
This is an important and often used theorem which will be the main focus of the next&lt;br /&gt;
few sections. A particular class of two-qubit gates which can be used to entangle qubits will&lt;br /&gt;
be discussed along with circuit diagrams for many qubits.&lt;br /&gt;
&lt;br /&gt;
====Controlled Operations====&lt;br /&gt;
&lt;br /&gt;
A controlled operation is one that is conditioned on the state of another part of the system, usually a qubit. The most cited example is the CNOT (controlled-NOT) gate, which flips one (target) qubit if another (control) qubit is in the state &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;; thus it is a controlled-NOT operation for qubits. This gate is used often enough to warrant detailed discussion here.&lt;br /&gt;
&lt;br /&gt;
Consider the following matrix operation on two qubits:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;C_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.31}}&lt;br /&gt;
&lt;br /&gt;
Under this transformation, the following changes occur:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{array}{c|c}&lt;br /&gt;
         \; \left\vert{\psi}\right\rangle\; &amp;amp; C_{12}\left\vert{\psi}\right\rangle \\ \hline&lt;br /&gt;
                \left\vert{00}\right\rangle &amp;amp; \left\vert{00}\right\rangle \\&lt;br /&gt;
                \left\vert{01}\right\rangle &amp;amp; \left\vert{01}\right\rangle \\&lt;br /&gt;
                \left\vert{10}\right\rangle &amp;amp; \left\vert{11}\right\rangle \\&lt;br /&gt;
                \left\vert{11}\right\rangle &amp;amp; \left\vert{10}\right\rangle &lt;br /&gt;
\end{array}&amp;lt;/math&amp;gt;|2.32}}&lt;br /&gt;
&lt;br /&gt;
This transformation is called the CNOT, or controlled NOT, since the second bit is flipped&lt;br /&gt;
if the first is in the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt; and otherwise left alone. The action of the gate can be written compactly as follows. Let &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; each be zero or one.&lt;br /&gt;
The CNOT is then given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{x}\right\rangle_{i}\left\vert{y}\right\rangle_{j} \overset{CNOT}{\rightarrow} \left\vert{x}\right\rangle_{i}\left\vert{x\oplus y}\right\rangle_{j}.&amp;lt;/math&amp;gt;|2.33}}&lt;br /&gt;
&lt;br /&gt;
In binary, of course &amp;lt;math&amp;gt;0\oplus 0 =0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;0\oplus 1 = 1 = 1\oplus 0&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;1\oplus 1 =0&amp;lt;/math&amp;gt;.  The circuit diagram is given in Fig. 2.3 below. &lt;br /&gt;
The first qubit at the top of the diagram, &amp;lt;math&amp;gt;\left\vert{x}\right\rangle&amp;lt;/math&amp;gt;, is called the&lt;br /&gt;
''control bit'' while the one below, &amp;lt;math&amp;gt;\left\vert{y}\right\rangle&amp;lt;/math&amp;gt;, is called the ''target bit''.&lt;br /&gt;
&lt;br /&gt;
[[File:CNOT.jpg|center|400px]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Figure 2.3: Circuit diagram for a CNOT gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
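A direct numerical check of Eqs. (2.31)-(2.33) (a sketch assuming numpy, not part of the text):&lt;br /&gt;

```python
import numpy as np

# CNOT matrix of Eq. (2.31); first qubit is the control, second the target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Verify the table of Eq. (2.32), i.e. |x>|y> -> |x>|x (+) y> of Eq. (2.33)
for x in (0, 1):
    for y in (0, 1):
        ket = np.zeros(4)
        ket[2 * x + y] = 1                # |xy> as a 4-component basis vector
        out = CNOT @ ket
        expected = np.zeros(4)
        expected[2 * x + (x ^ y)] = 1     # target becomes x XOR y
        assert np.array_equal(out, expected)
```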
&lt;br /&gt;
One can immediately generalize the operation of the CNOT to a controlled-U gate. This&lt;br /&gt;
is a gate, shown in Fig. 2.4, which implements a unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; on the second&lt;br /&gt;
qubit, if the state of the first is &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. The matrix transformation is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;CU_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{21} &amp;amp; u_{22} \end{array}\right),&amp;lt;/math&amp;gt;|2.34}}&lt;br /&gt;
&lt;br /&gt;
where the matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;U = \left(\begin{array}{cc}&lt;br /&gt;
          u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
          u_{21} &amp;amp; u_{22} \end{array}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
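Eq. (2.34) amounts to embedding &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; in the lower-right block of the identity. A sketch assuming numpy (the helper name controlled_u is ours, not the text's):&lt;br /&gt;

```python
import numpy as np

def controlled_u(U):
    """Controlled-U of Eq. (2.34): apply U to the target qubit
    only when the control qubit is |1>."""
    CU = np.eye(4, dtype=complex)
    CU[2:, 2:] = U          # U fills the lower-right 2x2 block
    return CU

X = np.array([[0, 1], [1, 0]])    # the NOT (bit-flip) gate
# Controlled-X reproduces the CNOT matrix of Eq. (2.31)
assert np.array_equal(controlled_u(X).real,
                      np.array([[1, 0, 0, 0],
                                [0, 1, 0, 0],
                                [0, 0, 0, 1],
                                [0, 0, 1, 0]]))
```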
&lt;br /&gt;
For example, the controlled-phase gate is given in [[#Figure 2.5|Fig. 2.5]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CU.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.4: Circuit diagram for a CU gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Many-qubit Circuits====&lt;br /&gt;
&lt;br /&gt;
Many-qubit circuits are a straightforward generalization of the single-qubit circuit diagrams.&lt;br /&gt;
For example, Fig. 2.6 shows the implementation of CNOT&amp;lt;math&amp;gt;_{14}&amp;lt;/math&amp;gt; and CNOT&amp;lt;math&amp;gt;_{23}&amp;lt;/math&amp;gt; in the&lt;br /&gt;
same diagram. The crossing of lines is not confusing since there is a target and control&lt;br /&gt;
which are clearly distinguished in each case.&lt;br /&gt;
&lt;br /&gt;
It is quite interesting, however, that as the diagrams become more complicated, the possibility&lt;br /&gt;
arises that one may change between equivalent forms of a circuit that, in the end,&lt;br /&gt;
&amp;lt;div id =&amp;quot;Figure 2.5&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CP.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.5: Circuit diagram for a controlled-phase (CPHASE) gate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Multiqcs.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.6: Multiple CNOT gates on a set of qubits.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
implements the same multiple-qubit unitary. For example, noting that &amp;lt;math&amp;gt;H(C_{PHASE})H = CNOT\,\!&amp;lt;/math&amp;gt;, the two&lt;br /&gt;
circuits in Fig. 2.7 implement the same two-qubit unitary transformation. This enables the&lt;br /&gt;
simplification of some quite complicated circuits.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Hzhequiv.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.7: Two circuits which are equivalent since they implement the same two-qubit&lt;br /&gt;
unitary transformation.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
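The stated identity can be checked numerically: with Hadamards on the target qubit before and after a controlled-phase gate, the product equals a CNOT (a sketch assuming numpy):&lt;br /&gt;

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
I = np.eye(2)

CPHASE = np.diag([1, 1, 1, -1])                 # controlled-phase gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# H on the target qubit acts as I (tensor) H on the two-qubit space
IH = np.kron(I, H)
# The two circuits of Fig. 2.7 implement the same unitary
assert np.allclose(IH @ CPHASE @ IH, CNOT)
```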
&lt;br /&gt;
===Measurement===&lt;br /&gt;
&lt;br /&gt;
Measurement in quantum mechanics is quite different from that of&lt;br /&gt;
classical mechanics.  In classical mechanics (and computing), one assumes that a measurement&lt;br /&gt;
can be made at will without disturbing or changing the state of the&lt;br /&gt;
physical system.  In quantum mechanics, this assumption cannot be&lt;br /&gt;
made.  This is important for a variety of reasons that will become&lt;br /&gt;
clear later.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Standard Prescription====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the introduction a simple example was provided to distinguish quantum states from classical states.  This example of &lt;br /&gt;
two wells with one particle can (with caution) be used here as well.  &lt;br /&gt;
&lt;br /&gt;
Consider the quantum state in a superposition of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert\psi\right\rangle = \alpha_0\left\vert 0\right\rangle +&lt;br /&gt;
    \alpha_1\left\vert 1\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.35}}&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  If the state is measured in&lt;br /&gt;
the computational basis, the result will be &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  As always, it is important to note that before the measurement the state is not in either of the computational basis states but in a superposition of the two.&lt;br /&gt;
&lt;br /&gt;
This can be shown for the equal superposition, &amp;lt;math&amp;gt;\alpha_0=\alpha_1=1/\sqrt{2}\,\!&amp;lt;/math&amp;gt;, by acting on the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; with a Hadamard transformation,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert \psi\right\rangle = \left\vert 0\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.36}}&lt;br /&gt;
&lt;br /&gt;
This state, produced from a unitary transformation of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt;, has probability &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; and probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;.  If it were in one or the other, then acting on the state with a Hadamard transformation would give some probability of it being in &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and some probability of being in &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;. (This argument is so&lt;br /&gt;
simple and pointed that it was taken almost word-for-word from  [[Bibliography#Mermin:qcbook|Mermin's book]], page 27.)  &lt;br /&gt;
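Mermin's argument is easy to reproduce numerically (a sketch assuming numpy):&lt;br /&gt;

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Equal superposition, alpha_0 = alpha_1 = 1/sqrt(2)
psi = np.array([1, 1]) / np.sqrt(2)
# Eq. (2.36): H|psi> = |0>, with certainty
assert np.allclose(H @ psi, [1, 0])

# By contrast, H acting on a definite state |0> or |1> yields a 50/50 mix
assert np.allclose(np.abs(H @ np.array([1, 0]))**2, [0.5, 0.5])
assert np.allclose(np.abs(H @ np.array([0, 1]))**2, [0.5, 0.5])
```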
&lt;br /&gt;
A measurement in the computational basis is said to project this state into either the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probabilities &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt; respectively.  To understand this as a projection, consider the following way in which the &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;-component of the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is found.  The state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is projected onto the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; mathematically by taking the [[Index#I|inner product]] (see [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|Section C.4]]) of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle 0\mid  \psi\right\rangle = \alpha_0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.37}}&lt;br /&gt;
&lt;br /&gt;
Notice that this is a complex number and that its complex conjugate&lt;br /&gt;
can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi \mid 0\right\rangle = \alpha_0^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.38}}&lt;br /&gt;
&lt;br /&gt;
Therefore the probability can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi\mid 0 \right\rangle \left\langle 0\mid\psi\right\rangle = \left\vert\left\langle &lt;br /&gt;
  0\mid \psi\right\rangle \right\vert^2.\,\!&amp;lt;/math&amp;gt;|2.39}}&lt;br /&gt;
&lt;br /&gt;
Now consider a multiple-qubit system with state &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert \Psi\right\rangle = \sum_i \alpha_i\left\vert i\right\rangle.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
The result of a measurement is a projection and the&lt;br /&gt;
state is projected onto the basis state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;; the same properties are true of this more general&lt;br /&gt;
system.  &lt;br /&gt;
&lt;br /&gt;
To summarize, if a measurement is made on the system &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; is obtained with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Assuming that &amp;lt;math&amp;gt;\left\vert i\right\rangle \,\!&amp;lt;/math&amp;gt; results from the measurement, the state of the&lt;br /&gt;
system has been projected into the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  Therefore, the&lt;br /&gt;
state of the system immediately after the measurement is &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
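This prescription can be simulated directly: sample outcomes with the Born-rule probabilities &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt; and replace the state by the obtained basis state (a sketch assuming numpy; the amplitudes are an arbitrary example):&lt;br /&gt;

```python
import numpy as np

# A single-qubit example: |psi> = 0.6|0> + 0.8|1>
psi = np.array([0.6, 0.8])
probs = np.abs(psi)**2                 # Born-rule probabilities (0.36 and 0.64)
assert np.isclose(probs.sum(), 1.0)

rng = np.random.default_rng(1)
outcomes = rng.choice(2, size=100_000, p=probs)   # repeated measurements
# The observed frequency of outcome 0 approaches |alpha_0|^2 = 0.36
assert abs(np.mean(outcomes == 0) - 0.36) < 0.01

# After obtaining result i, the post-measurement state is |i>
i = outcomes[0]
post = np.zeros(2)
post[i] = 1.0
```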
&lt;br /&gt;
A circuit diagram with a measurement represented by a box with an&lt;br /&gt;
arrow is given in Figure 2.8.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementcd.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.8: The circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
An alternative is to put an &amp;lt;nowiki&amp;gt;&amp;quot;M&amp;quot;&amp;lt;/nowiki&amp;gt; inside the box.  This is shown in Fig. 2.9.  &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementM.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.9: An alternative circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an example, the measurement result can be used as input for another operation.  The unitary transform&lt;br /&gt;
in Figure 2.10 is one that depends upon the outcome of the&lt;br /&gt;
measurement.  Notice that the information input, since it is&lt;br /&gt;
classical, is represented by a double line.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurement.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.10: A circuit which includes a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Projection Operators====&lt;br /&gt;
&lt;br /&gt;
Projection operators are used quite often and the description of&lt;br /&gt;
measurement in the previous section is a good example of how they are&lt;br /&gt;
used.  One may ask, what is a projector?  In ordinary&lt;br /&gt;
three-dimensional space, a vector is written as &lt;br /&gt;
&amp;lt;math&amp;gt;\vec v=v_x\hat{x}+v_y\hat{y}+v_z\hat{z}\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; part of the&lt;br /&gt;
vector can be obtained by &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x}\cdot\vec v) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
This is the part of the vector lying along the x axis.  Notice that if&lt;br /&gt;
the projection is performed again, the same result is obtained&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x} \cdot v_x\hat{x}) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.41}}&lt;br /&gt;
&lt;br /&gt;
This is the defining characteristic of projection operations.  When one is&lt;br /&gt;
performed twice, the second result is the same as the first.  &lt;br /&gt;
&lt;br /&gt;
This can be extended to the complex vectors in quantum mechanics.  The&lt;br /&gt;
outer product &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\!\!\left\langle{x}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector.  For example,&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector and can be written in matrix form as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert = \left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.42}}&lt;br /&gt;
&lt;br /&gt;
Acting with this on &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
           \alpha_1 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.43}}&lt;br /&gt;
&lt;br /&gt;
Acting again produces&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
              0 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.44}}&lt;br /&gt;
&lt;br /&gt;
This is due to the fact that&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert)^2 = \left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.45}}&lt;br /&gt;
&lt;br /&gt;
In fact, this property essentially defines a projection.  A projection is&lt;br /&gt;
a linear transformation &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;P^2 = P\,\!&amp;lt;/math&amp;gt;. Much of our intuition about geometric projections in&lt;br /&gt;
three dimensions carries over to the more abstract cases.  One important&lt;br /&gt;
example is that the sum over all projectors is the identity. The&lt;br /&gt;
generalization to arbitrary dimensions, where &amp;lt;math&amp;gt;\left\vert{i}\right\rangle\,\!&amp;lt;/math&amp;gt; is any basis&lt;br /&gt;
vector in that space, is immediate.  In this case the identity,&lt;br /&gt;
expressed as a sum over all projectors, is &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sum_{i} \left\vert{i}\right\rangle\!\!\left\langle{i}\right\vert = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.46}}&lt;br /&gt;
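The projector properties of Eqs. (2.42)-(2.46) in explicit matrix form (a sketch assuming numpy):&lt;br /&gt;

```python
import numpy as np

ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])
P0 = ket0 @ ket0.T          # outer product |0><0|, Eq. (2.42)
P1 = ket1 @ ket1.T

psi = np.array([[0.6], [0.8]])          # alpha_0 = 0.6, alpha_1 = 0.8
assert np.allclose(P0 @ psi, [[0.6], [0.0]])     # Eq. (2.43): keeps alpha_0
assert np.allclose(P0 @ (P0 @ psi), P0 @ psi)    # Eq. (2.44): projecting again changes nothing
assert np.allclose(P0 @ P0, P0)                  # Eq. (2.45): P^2 = P
assert np.allclose(P0 + P1, np.eye(2))           # Eq. (2.46): projectors sum to the identity
```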
&lt;br /&gt;
====Phase in/Phase out====&lt;br /&gt;
&lt;br /&gt;
The probability of finding the system in the state &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
where &amp;lt;math&amp;gt;x=0\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi}\right\rangle}(\left\vert{x}\right\rangle) &amp;amp;= \left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.47}}&lt;br /&gt;
Note that &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\langle{\psi}\right\vert\,\!&amp;lt;/math&amp;gt; both appear in this&lt;br /&gt;
expression. So if &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; were &lt;br /&gt;
substituted into the expression for &amp;lt;math&amp;gt;\mbox{Prob}(\left\vert{x}\right\rangle)\,\!&amp;lt;/math&amp;gt;, then the&lt;br /&gt;
expression is unchanged, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi^\prime}\right\rangle}(\left\vert{x}\right\rangle) &lt;br /&gt;
                     &amp;amp;= \left\langle{\psi^\prime}\mid{x}\right\rangle\left\langle{x}\mid{\psi^\prime}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= e^{-i\theta}\left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle e^{i\theta} \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.48}}&lt;br /&gt;
Therefore when &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; changes by a phase, there is no effect on&lt;br /&gt;
this probability.  This is why it is often said that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
         e^{i\theta} &amp;amp; 0 \\&lt;br /&gt;
               0  &amp;amp; e^{-i\theta}  \end{array}\right) &lt;br /&gt;
= e^{i\theta}\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.49}}&lt;br /&gt;
is equivalent to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.50}}&lt;br /&gt;
&lt;br /&gt;
However, there are times when a phase can make a difference. In&lt;br /&gt;
those cases it is really a ''relative'' phase between two states that makes the difference. This will become clear later on.&lt;br /&gt;
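Both points, the irrelevance of a global phase and the observability of a relative phase, can be checked numerically (a sketch assuming numpy; the states chosen are arbitrary examples):&lt;br /&gt;

```python
import numpy as np

psi = np.array([0.6, 0.8j])                 # a normalized qubit state
psi_prime = np.exp(-1j * 1.23) * psi        # same state up to a global phase

# Eq. (2.48): a global phase leaves every outcome probability unchanged
assert np.allclose(np.abs(psi)**2, np.abs(psi_prime)**2)

# A *relative* phase, however, is observable: |0>+|1> and |0>-|1> give
# opposite, deterministic outcomes after a Hadamard
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
plus = np.array([1, 1]) / np.sqrt(2)
minus = np.array([1, -1]) / np.sqrt(2)
assert np.allclose(np.abs(H @ plus)**2, [1, 0])
assert np.allclose(np.abs(H @ minus)**2, [0, 1])
```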
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 3 - Physics of Quantum Information#Introduction|Continue to '''Chapter 3 - Physics of Quantum Information''']]&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1782</id>
		<title>Chapter 2 - Qubits and Collections of Qubits</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1782"/>
		<updated>2012-01-05T08:32:29Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Many-qubit Circuits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
There are several parts to any quantum information processing task. Some of these were&lt;br /&gt;
written down and discussed by David DiVincenzo in the early days of quantum computing&lt;br /&gt;
research and are therefore called DiVincenzo’s requirements for quantum computing. These&lt;br /&gt;
include, but are not limited to, the following, which will be discussed in this chapter. Other&lt;br /&gt;
requirements will be discussed later.&lt;br /&gt;
&lt;br /&gt;
Five requirements [[Bibliography#qcrequirements|DiVincenzo:2000]]:&lt;br /&gt;
#Be a scalable physical system with well-defined qubits&lt;br /&gt;
#Be initializable to a simple fiducial state such as &amp;lt;math&amp;gt;\left\vert{000...}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
#Have much longer decoherence times than gating times&lt;br /&gt;
#Have a universal set of quantum gates&lt;br /&gt;
#Permit qubit-specific measurements&lt;br /&gt;
&lt;br /&gt;
The first requirement is a set of two-state quantum systems which can serve as qubits. The&lt;br /&gt;
second is to be able to initialize the set of qubits to some reference state. In this chapter,&lt;br /&gt;
these will be taken for granted. The third concerns noise, which has become known by &lt;br /&gt;
the term decoherence. The term decoherence has had a more precise definition in the past,&lt;br /&gt;
but here it will usually be synonymous with noise. Noise and decoherence will be discussed in [[Chapter 6 - Noise in Quantum Systems|Chapter 6]].  This chapter is primarily concerned with the fifth of these criteria.  This will enable us to discuss many interesting aspects of quantum information problems while postponing some other technical details regarding the other criteria.&lt;br /&gt;
&lt;br /&gt;
===Qubit States===&lt;br /&gt;
&lt;br /&gt;
As mentioned in the introduction, a qubit, or quantum bit, is represented by a two-state&lt;br /&gt;
quantum system. It is referred to as a two-state quantum system, although there are many&lt;br /&gt;
physical examples of qubits which are represented by two different states of a quantum&lt;br /&gt;
system that has many available states. These two states are represented by the vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; and the qubit could be in the state &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;, the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, or a complex superposition of&lt;br /&gt;
these two. A qubit state which is an arbitrary superposition is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle,&amp;lt;/math&amp;gt; |2.1}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers. Our objective is to use these two states to store and&lt;br /&gt;
manipulate information. If the state of the system is confined to one state, the other, or a&lt;br /&gt;
superposition of the two, then&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1.\,\!&amp;lt;/math&amp;gt; |2.2}}&lt;br /&gt;
&lt;br /&gt;
This means that this vector is normalized, i.e. its magnitude (or length) is one. The set of all such&lt;br /&gt;
vectors forms a two-dimensional complex (so four-dimensional real) vector space.&amp;lt;ref name=&amp;quot;test&amp;quot;&amp;gt;[[Appendix B - Complex Numbers|Appendix B]] contains a basic introduction to complex numbers.&amp;lt;/ref&amp;gt; The basis vectors for such a space are the two vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which are called ''computational basis'' states. These two basis states are represented by&lt;br /&gt;
 &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{0}\right\rangle = \left(\begin{array}{c} 1 \\ 0\end{array}\right), \;\;\left\vert{1}\right\rangle = \left(\begin{array}{c} 0 \\ 1\end{array}\right).&amp;lt;/math&amp;gt; |2.3}}&lt;br /&gt;
&lt;br /&gt;
Thus, the qubit state can be rewritten as&lt;br /&gt;
&lt;br /&gt;
{{Equation |&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1\end{array}\right).&amp;lt;/math&amp;gt; |2.4}}&lt;br /&gt;
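&lt;br /&gt;
The vector description above can be sketched numerically. The following is a minimal NumPy illustration (not part of the original text); the amplitudes 3/5 and 4i/5 are arbitrary choices satisfying the normalization of Eq. (2.2).&lt;br /&gt;

```python
import numpy as np

# Computational basis states |0> and |1> as column vectors (Eq. 2.3)
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# An arbitrary superposition (Eq. 2.1); the amplitudes are illustrative
alpha0, alpha1 = 3/5, 4j/5
psi = alpha0 * ket0 + alpha1 * ket1

# Normalization (Eq. 2.2): |alpha0|^2 + |alpha1|^2 = 1
norm = abs(alpha0)**2 + abs(alpha1)**2
```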
&lt;br /&gt;
===Qubit Gates===&lt;br /&gt;
&lt;br /&gt;
During a computation, one qubit state will need to be taken to a different one. In fact,&lt;br /&gt;
any valid state should be able to be operated upon to obtain any other state. Since this&lt;br /&gt;
is a complex vector with magnitude one, the matrix transformation required for closed system&lt;br /&gt;
evolution is unitary. (See [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Appendix C, Sec. C.3.8]].) These unitary matrices, or unitary&lt;br /&gt;
transformations, as well as their generalization to many qubits, transform one complex&lt;br /&gt;
vector into another and are also called ''quantum gates'', or gating operations. Mathematically,&lt;br /&gt;
we may think of them as rotations of the complex vector; in some cases (but not all) they&lt;br /&gt;
correspond to actual rotations of the physical system.&lt;br /&gt;
&lt;br /&gt;
====Circuit Diagrams for Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
Unitary transformations are represented in a circuit diagram with a box around the unitary&lt;br /&gt;
transformation. Consider a unitary transformation &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on a single qubit state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;. If the&lt;br /&gt;
result of the transformation is &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, we can then write&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = V\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.5}}&lt;br /&gt;
&lt;br /&gt;
The corresponding circuit diagram is shown in Fig. 2.1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:Vbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.1: Circuit diagram for a one-qubit gate that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;. The input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the output, &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notice that the diagram is read from left to right. This means that if two consecutive&lt;br /&gt;
gates are implemented, say &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;, the equation reads:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle = UV\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.6}}&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The circuit diagram will have the boxes in the reverse order from the equation, i.e.&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on the left and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; on the right (refer to Fig. 2.2 below). While this ordering may seem confusing, it is important to remember the convention; circuit diagrams become increasingly important as the number of operations grows larger.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:UVbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.2: Circuit diagram for two one-qubit gates that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; followed by another unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt;. Like the single gate, the input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the new output, &amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
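&lt;br /&gt;
The ordering convention of Eq. (2.6) can be checked directly with matrices. As a sketch (the choice of &amp;lt;math&amp;gt;V = X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;U = Z&amp;lt;/math&amp;gt; is merely illustrative):&lt;br /&gt;

```python
import numpy as np

# Illustrative choice: V = X (bit-flip) applied first, then U = Z (phase-flip)
V = np.array([[0, 1], [1, 0]], dtype=complex)
U = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1, 0], dtype=complex)   # start in the state |0>

# "V first, then U" reads right to left in Eq. (2.6): |psi''> = U V |psi>
psi_out = (U @ V) @ psi
```

Here X takes |0&gt; to |1&gt;, and Z then flips the sign, so the output is -|1&gt;.&lt;br /&gt;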
&lt;br /&gt;
====Examples of Important Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
There are, of course, an infinite number of possible unitary transformations that we could&lt;br /&gt;
implement on a single qubit since the set of unitary transformations can be parameterized by&lt;br /&gt;
three parameters. However, a single gate will contain a single unitary transformation, which&lt;br /&gt;
means that all three parameters are fixed. There are several such transformations that are&lt;br /&gt;
used repeatedly. For this reason, they are listed here along with their actions on a generic&lt;br /&gt;
state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;. Note that one could also completely define the transformation by&lt;br /&gt;
its action on a complete set of basis states.&lt;br /&gt;
&lt;br /&gt;
The following is called an &amp;lt;nowiki&amp;gt;“x”&amp;lt;/nowiki&amp;gt; gate, or a bit-flip, &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X = \left(\begin{array}{cc} 0 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.7}}&lt;br /&gt;
&lt;br /&gt;
Its action on a state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is to exchange the basis states,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X\left\vert{\psi}\right\rangle = \alpha_0\left\vert{1}\right\rangle + \alpha_1\left\vert{0}\right\rangle.&amp;lt;/math&amp;gt;|2.8}}&lt;br /&gt;
&lt;br /&gt;
For this reason, it is also sometimes called a NOT gate. However, this term will be avoided&lt;br /&gt;
because a general NOT gate does not exist for all quantum states. (It does work for all qubit&lt;br /&gt;
states, but this is a special case.)&lt;br /&gt;
&lt;br /&gt;
The next gate is called a ''phase gate'' or a “z” gate. It is also sometimes called a ''phase-flip'',&lt;br /&gt;
and is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z = \left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.9}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate is to introduce a sign change on the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which can be seen&lt;br /&gt;
through&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle - \alpha_1\left\vert{1}\right\rangle.&amp;lt;/math&amp;gt;|2.10}}&lt;br /&gt;
&lt;br /&gt;
The term phase gate is also used for the more general transformation&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;P = \left(\begin{array}{cc} e^{i\theta} &amp;amp; 0 \\ &lt;br /&gt;
                                0       &amp;amp; e^{-i\theta} \end{array}\right).&amp;lt;/math&amp;gt;|2.11}}&lt;br /&gt;
&lt;br /&gt;
For this reason, the z-gate will either be called a “z-gate” or a phase-flip gate.&lt;br /&gt;
&lt;br /&gt;
Another gate closely related to these, is the “y” gate. This gate is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y =  \left(\begin{array}{cc} 0 &amp;amp; -i \\ &lt;br /&gt;
                      i &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.12}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate on a state is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y\left\vert{\psi}\right\rangle = -i\alpha_1\left\vert{0}\right\rangle +i \alpha_0\left\vert{1}\right\rangle &lt;br /&gt;
            = -i(\alpha_1\left\vert{0}\right\rangle - \alpha_0\left\vert{1}\right\rangle)&amp;lt;/math&amp;gt;|2.13}}&lt;br /&gt;
&lt;br /&gt;
From this last expression, it is clear that, up to an overall factor of &amp;lt;math&amp;gt;-i\,\!&amp;lt;/math&amp;gt;, this gate is the same&lt;br /&gt;
as acting on a state with both &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt; gates. However, the order matters, and it&lt;br /&gt;
should be noted that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;XZ = -i Y,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
whereas&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;ZX = i Y.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that the order matters should not be a surprise to anyone since matrices in general&lt;br /&gt;
do not commute. However, such a condition arises so often in quantum mechanics that the&lt;br /&gt;
difference between these two is given an expression and a name. The difference between the two is called the ''commutator'' and is denoted with a &amp;lt;math&amp;gt;[\cdot,\cdot]&amp;lt;/math&amp;gt;. That is, for any two matrices, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;, the commutator is defined to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[A,B] = AB -BA.\,\!&amp;lt;/math&amp;gt;|2.14}}&lt;br /&gt;
For the two gates &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt;,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[X,Z] = -2iY.\,\!&amp;lt;/math&amp;gt;|2.15}}&lt;br /&gt;
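&lt;br /&gt;
The relations &amp;lt;math&amp;gt;XZ = -iY&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;ZX = iY&amp;lt;/math&amp;gt; and Eq. (2.15) can be verified by direct matrix multiplication, for example with NumPy (an illustration, not part of the original text):&lt;br /&gt;

```python
import numpy as np

# The X, Y, Z gates of Eqs. (2.7), (2.12) and (2.9)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# The order of the product matters: XZ and ZX differ by a sign
XZ = X @ Z          # equals -iY
ZX = Z @ X          # equals +iY

# The commutator [X, Z] = XZ - ZX of Eq. (2.14), giving Eq. (2.15)
comm = XZ - ZX
```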
A very important gate which is used in many quantum information processing protocols,&lt;br /&gt;
including quantum algorithms, is called the Hadamard gate,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.16}}&lt;br /&gt;
In this case, it is helpful to look at what this gate does to the two basis states:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H \left\vert{0}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle + \left\vert{1}\right\rangle), &amp;lt;/math&amp;gt;&amp;lt;br /&amp;gt;&amp;lt;math&amp;gt;H \left\vert{1}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle - \left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.17}}&lt;br /&gt;
&lt;br /&gt;
So the Hadamard gate will take either one of the basis states and produce an equal superposition&lt;br /&gt;
of the two basis states; this is the reason it is so often used in quantum information&lt;br /&gt;
processing tasks. On a generic state,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert{\psi}\right\rangle = \frac{1}{\sqrt{2}}[(\alpha_0+\alpha_1)\left\vert{0}\right\rangle + (\alpha_0-\alpha_1)\left\vert{1}\right\rangle].&amp;lt;/math&amp;gt;|2.18}}&lt;br /&gt;
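&lt;br /&gt;
The action of the Hadamard gate in Eq. (2.17), and the fact that applying it twice returns the original state, can be sketched numerically (an illustration, not part of the original text):&lt;br /&gt;

```python
import numpy as np

# The Hadamard gate of Eq. (2.16)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Eq. (2.17): H maps each basis state to an equal superposition
H0 = H @ ket0   # (|0> + |1>)/sqrt(2)
H1 = H @ ket1   # (|0> - |1>)/sqrt(2)
```

Note that H is its own inverse, since H H = I.&lt;br /&gt;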
&lt;br /&gt;
===The Pauli Matrices===&lt;br /&gt;
The three matrices &amp;lt;math&amp;gt;X,\,\!&amp;lt;/math&amp;gt; [[#eq2.7|Eq.(2.7)]] &amp;lt;math&amp;gt;Y,\,\!&amp;lt;/math&amp;gt; [[#eq2.12|Eq.(2.12)]]  and &amp;lt;math&amp;gt; Z \,\!&amp;lt;/math&amp;gt; [[#eq2.9|Eq.(2.9)]] are called the Pauli matrices. They are also sometimes denoted &amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt; respectively. They are ubiquitous in quantum computing and quantum information processing. This is because they, along with the &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix, form a basis for the set of &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrices and can be used to&lt;br /&gt;
describe all &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; unitary transformations as well. We will return to the latter point in the next chapter.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table2.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 2.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot; |Table 2.1: ''The Pauli Matrices.  The table shows the Pauli matrices, three different, but common notations, and the action on a state.  The &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a binary digit, 0 or 1.''&lt;br /&gt;
|-&lt;br /&gt;
|Pauli Matrix&lt;br /&gt;
|Notation 1&lt;br /&gt;
|Notation 2&lt;br /&gt;
|Notation 3&lt;br /&gt;
|Action&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; 1 \\ 1 &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X|x\rangle = |x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; -i \\ i &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y =iXZ\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y|x\rangle = i(-1)^x|x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z|x\rangle = (-1)^x|x\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To show that they form a basis for &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; Hermitian matrices, note that any such matrix can be written in the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;A = \left(\begin{array}{cc} &lt;br /&gt;
                a_0+a_3  &amp;amp; a_1+ia_2 \\ &lt;br /&gt;
                a_1-ia_2 &amp;amp; a_0-a_3 \end{array}\right).&amp;lt;/math&amp;gt;|2.19}}&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;a_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_3\,\!&amp;lt;/math&amp;gt; are arbitrary, &amp;lt;math&amp;gt;a_0 + a_3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_0 - a_3\,\!&amp;lt;/math&amp;gt; are arbitrary too. This matrix can be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}A &amp;amp;= a_0 \mathbb{I} + a_1X + a_2Y + a_3 Z \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + a_1\sigma_1 + a_2\sigma_2 + a_3 \sigma_3 \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + \vec{a}\cdot\vec{\sigma}, \\&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|2.20}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{a}\cdot\vec{\sigma} = \sum_{i=1}^3a_i\sigma_i\,\!&amp;lt;/math&amp;gt; is the &amp;quot;dot&lt;br /&gt;
product&amp;quot; between &amp;lt;math&amp;gt;\vec{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{\sigma} = (\sigma_1,\sigma_2,\sigma_3)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
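&lt;br /&gt;
The expansion of Eq. (2.20) can be inverted using the trace orthogonality given below in Eq. (2.23): the coefficients are &amp;lt;math&amp;gt;a_0 = \mbox{Tr}(A)/2&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_i = \mbox{Tr}(\sigma_i A)/2&amp;lt;/math&amp;gt;. A numerical sketch (the matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; below is an arbitrary example):&lt;br /&gt;

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# An arbitrary 2x2 Hermitian matrix (values chosen for illustration)
A = np.array([[2.0, 1 - 0.5j], [1 + 0.5j, -1.0]])

# Extract the coefficients: a_0 = Tr(A)/2, a_i = Tr(sigma_i A)/2,
# which follow from Tr(sigma_i sigma_j) = 2 delta_ij
a = [np.trace(S @ A).real / 2 for S in (I2, X, Y, Z)]

# Rebuild A from Eq. (2.20): A = a_0 I + a_1 X + a_2 Y + a_3 Z
A_rebuilt = a[0]*I2 + a[1]*X + a[2]*Y + a[3]*Z
```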
&lt;br /&gt;
An important and useful relationship between these is the following (which shows why&lt;br /&gt;
the latter notation above is so useful)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma_i\sigma_j = \mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k,&amp;lt;/math&amp;gt;|2.21}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;i, j, k\,\!&amp;lt;/math&amp;gt; are numbers from the set &amp;lt;math&amp;gt;\{1, 2, 3\}\,\!&amp;lt;/math&amp;gt; and the definitions for &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; are given&lt;br /&gt;
in Eqs. [[Appendix C - Vectors and Linear Algebra#eqC.17|(C.17)]] and [[Appendix C - Vectors and Linear Algebra#eqC.8|(C.8)]] respectively. The three matrices &amp;lt;math&amp;gt;\sigma_1, \sigma_2, \sigma_3\,\!&amp;lt;/math&amp;gt; are traceless Hermitian&lt;br /&gt;
matrices and they can be seen to be orthogonal using the so-called ''Hilbert-Schmidt inner product'', which is defined, for matrices &amp;lt;math&amp;gt; A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(A,B) = \mbox{Tr}(A^\dagger B).&amp;lt;/math&amp;gt;|2.22}}&lt;br /&gt;
&lt;br /&gt;
The orthogonality for the set is then summarized as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\sigma_i,\sigma_j) = \mbox{Tr}(\sigma_i\sigma_j) = 2\delta_{ij}.\,\!&amp;lt;/math&amp;gt;|2.23}}&lt;br /&gt;
&lt;br /&gt;
This property is contained in Eq. [[#eq2.21|(2.21)]]. This one equation also contains all of the commutators.&lt;br /&gt;
Subtracting the equation with the product reversed,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = (\mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k) &lt;br /&gt;
                      -(\mathbb{I}\delta_{ji} +i \epsilon_{jik}\sigma_k),&amp;lt;/math&amp;gt;|2.24}}&lt;br /&gt;
&lt;br /&gt;
but &amp;lt;math&amp;gt;\delta_{ij}=\delta_{ji}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk} = -\epsilon_{jik}\,\!&amp;lt;/math&amp;gt;.  This can now be simplified,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = 2i \epsilon_{ijk}\sigma_k.\,\!&amp;lt;/math&amp;gt;|2.25}}&lt;br /&gt;
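&lt;br /&gt;
Equation (2.21), which packages both the orthogonality of Eq. (2.23) and the commutators of Eq. (2.25), can be checked exhaustively over all index pairs. A sketch (the small Levi-Civita helper below is an illustrative construction, not from the original text):&lt;br /&gt;

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),    # sigma_1 = X
         np.array([[0, -1j], [1j, 0]]),                # sigma_2 = Y
         np.array([[1, 0], [0, -1]], dtype=complex)]   # sigma_3 = Z
I2 = np.eye(2, dtype=complex)

def eps(i, j, k):
    """Levi-Civita symbol for zero-based indices i, j, k in {0, 1, 2}."""
    if {i, j, k} != {0, 1, 2}:
        return 0                     # repeated index -> 0
    return ((j - i) * (k - i) * (k - j)) // 2   # +1 or -1 by parity

# Verify sigma_i sigma_j = I delta_ij + i eps_ijk sigma_k (Eq. 2.21)
ok = True
for i in range(3):
    for j in range(3):
        rhs = I2 * (i == j) + 1j * sum(eps(i, j, k) * sigma[k] for k in range(3))
        ok &= np.allclose(sigma[i] @ sigma[j], rhs)
```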
&lt;br /&gt;
===States of Many Qubits===&lt;br /&gt;
Let us now consider the states of several (or many) qubits. For one qubit, there are two&lt;br /&gt;
possible basis states, say &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. If there are two qubits, each with these basis states,&lt;br /&gt;
basis states for the two together are found by using the tensor product. (See Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]].)&lt;br /&gt;
The set of basis states obtained in this way is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{0}\right\rangle\otimes\left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{1}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle\otimes\left\vert{1}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This set is more often written in short-hand notation as (again see Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]] for details and examples)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{00}\right\rangle, \; \left\vert{01}\right\rangle, \;&lt;br /&gt;
  \left\vert{10}\right\rangle, \; \left\vert{11}\right\rangle \right\},\,\!&amp;lt;/math&amp;gt;|2.26}}&lt;br /&gt;
&lt;br /&gt;
which can also be expressed as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left(\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right)&lt;br /&gt;
\right\}.\,\!&amp;lt;/math&amp;gt;|2.27}}&lt;br /&gt;
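&lt;br /&gt;
The tensor product used to build Eqs. (2.26) and (2.27) is the Kronecker product of the component vectors. A minimal numerical illustration (not part of the original text):&lt;br /&gt;

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Two-qubit basis states |xy> = |x> (x) |y> via the Kronecker product
ket00 = np.kron(ket0, ket0)   # column (1, 0, 0, 0)
ket01 = np.kron(ket0, ket1)   # column (0, 1, 0, 0)
ket10 = np.kron(ket1, ket0)   # column (0, 0, 1, 0)
ket11 = np.kron(ket1, ket1)   # column (0, 0, 0, 1)
```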
&lt;br /&gt;
The extension to three qubits is straight-forward,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{000}\right\rangle, \; \left\vert{001}\right\rangle, \;&lt;br /&gt;
  \left\vert{010}\right\rangle, \; \left\vert{011}\right\rangle, \; \left\vert{100}\right\rangle, \; \left\vert{101}\right\rangle, \;&lt;br /&gt;
  \left\vert{110}\right\rangle, \; \left\vert{111}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.28}}&lt;br /&gt;
&lt;br /&gt;
Those familiar with binary will recognize these as the numbers zero through seven. Thus we&lt;br /&gt;
consider this an ''ordered basis'', and the states can also be acceptably presented as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{2}\right\rangle, \; \left\vert{3}\right\rangle, \; \left\vert{4}\right\rangle, \; \left\vert{5}\right\rangle, \;&lt;br /&gt;
  \left\vert{6}\right\rangle, \; \left\vert{7}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.29}}&lt;br /&gt;
&lt;br /&gt;
The ordering of the products is important because each spot&lt;br /&gt;
corresponds to a physical particle or physical system.  When some&lt;br /&gt;
confusion may arise, we may also label the ket with a subscript to&lt;br /&gt;
denote the particle or position.  For example, two different people,&lt;br /&gt;
Alice and Bob, can be used to represent distant parties that may&lt;br /&gt;
share some information or wish to communicate.  In this case, the&lt;br /&gt;
state belonging to Alice can be denoted &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_A\,\!&amp;lt;/math&amp;gt;.  Or if she is&lt;br /&gt;
referred to as party 1 or particle 1, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The most general 2-qubit state is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_{00}\left\vert{00}\right\rangle + \alpha_{01}\left\vert{01}\right\rangle &lt;br /&gt;
             + \alpha_{10}\left\vert{10}\right\rangle + \alpha_{11}\left\vert{11}\right\rangle &lt;br /&gt;
           =\left(\begin{array}{c} \alpha_{00} \\ \alpha_{01} \\ &lt;br /&gt;
                                   \alpha_{10} \\ \alpha_{11} \end{array}\right).&amp;lt;/math&amp;gt;|2.30}}&lt;br /&gt;
&lt;br /&gt;
The normalization condition is &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_{00}|^2  + |\alpha_{01}|^2&lt;br /&gt;
             + |\alpha_{10}|^2 + |\alpha_{11}|^2=1.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
The generalization to an arbitrary number of qubits, say &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, is also&lt;br /&gt;
rather straight-forward and can be written as &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \sum_{i=0}^{2^n-1} \alpha_i\left\vert{i}\right\rangle.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Quantum Gates for Many Qubits===&lt;br /&gt;
&lt;br /&gt;
Just as the case for one single qubit, the most general closed-system transformation of a&lt;br /&gt;
state of many qubits is a unitary transformation. Being able to make an arbitrary unitary&lt;br /&gt;
transformation on many qubits is an important task. If an arbitrary unitary transformation&lt;br /&gt;
on a set of qubits can be made, then any quantum gate can be implemented. If this ability to&lt;br /&gt;
implement any arbitrary quantum gate can be accomplished using a particular set of quantum&lt;br /&gt;
gates, that set is said to be a ''universal set of gates'' or that the condition of ''universality'' has&lt;br /&gt;
been met by this set. It turns out that there is a theorem which provides one way for&lt;br /&gt;
identifying a universal set of gates.&lt;br /&gt;
&lt;br /&gt;
'''Theorem:'''&lt;br /&gt;
&lt;br /&gt;
''The ability to implement an entangling gate between any two qubits, plus the ability to implement all single-qubit unitary transformations, will enable universal quantum computing.''&lt;br /&gt;
&lt;br /&gt;
It turns out that one doesn’t need to be able to perform an entangling gate between&lt;br /&gt;
distant qubits; nearest-neighbor interactions are sufficient. We can transfer the state of a&lt;br /&gt;
qubit to a qubit that is next to the one we would like it to interact with, then perform&lt;br /&gt;
the entangling gate between the two and then transfer back.&lt;br /&gt;
&lt;br /&gt;
This is an important and often used theorem which will be the main focus of the next&lt;br /&gt;
few sections. A particular class of two-qubit gates which can be used to entangle qubits will&lt;br /&gt;
be discussed along with circuit diagrams for many qubits.&lt;br /&gt;
&lt;br /&gt;
====Controlled Operations====&lt;br /&gt;
&lt;br /&gt;
A controlled operation is one that is conditioned on the state of another part of the system, usually a qubit. The most cited example is the CNOT (controlled NOT) gate, which flips one (target) qubit if another (control) qubit is in the state &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;; thus it is a controlled NOT operation for qubits. This gate is used often enough to warrant detailed discussion here.&lt;br /&gt;
&lt;br /&gt;
Consider the following matrix operation on two qubits:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;C_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.31}}&lt;br /&gt;
&lt;br /&gt;
Under this transformation, the following changes occur:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{array}{c|c}&lt;br /&gt;
         \; \left\vert{\psi}\right\rangle\; &amp;amp; C_{12}\left\vert{\psi}\right\rangle \\ \hline&lt;br /&gt;
                \left\vert{00}\right\rangle &amp;amp; \left\vert{00}\right\rangle \\&lt;br /&gt;
                \left\vert{01}\right\rangle &amp;amp; \left\vert{01}\right\rangle \\&lt;br /&gt;
                \left\vert{10}\right\rangle &amp;amp; \left\vert{11}\right\rangle \\&lt;br /&gt;
                \left\vert{11}\right\rangle &amp;amp; \left\vert{10}\right\rangle &lt;br /&gt;
\end{array}&amp;lt;/math&amp;gt;|2.32}}&lt;br /&gt;
&lt;br /&gt;
This transformation is called the CNOT, or controlled NOT, since the second bit is flipped&lt;br /&gt;
if the first is in the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt; and otherwise left alone. The circuit diagram for this transformation corresponds to the following representation of the gate. Let &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; be zero or one.&lt;br /&gt;
The CNOT is then given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{x}\right\rangle_{i}\left\vert{y}\right\rangle_{j} \overset{CNOT}{\rightarrow} \left\vert{x}\right\rangle_{i}\left\vert{x\oplus y}\right\rangle_{j}.&amp;lt;/math&amp;gt;|2.33}}&lt;br /&gt;
&lt;br /&gt;
In binary, of course &amp;lt;math&amp;gt;0\oplus 0 =0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;0\oplus 1 = 1 = 1\oplus 0&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;1\oplus 1 =0&amp;lt;/math&amp;gt;.  The circuit diagram is given in Fig. 2.3 below. &lt;br /&gt;
The first qubit at the top of the diagram, &amp;lt;math&amp;gt;\left\vert{x}\right\rangle&amp;lt;/math&amp;gt;, is called the&lt;br /&gt;
''control bit'' while the one below, &amp;lt;math&amp;gt;\left\vert{y}\right\rangle&amp;lt;/math&amp;gt;, is called the ''target bit''.&lt;br /&gt;
&lt;br /&gt;
[[File:CNOT.jpg|center|400px]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Figure 2.3: Circuit diagram for a CNOT gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
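&lt;br /&gt;
The action of the CNOT in Eq. (2.33) can be confirmed against the matrix of Eq. (2.31) for all four basis states. A sketch (not part of the original text):&lt;br /&gt;

```python
import numpy as np

# The CNOT matrix of Eq. (2.31)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# |x>|y> as a 4-component vector, using rows of the identity as |0>, |1>
ket = lambda x, y: np.kron(np.eye(2)[x], np.eye(2)[y])

# Eq. (2.33): |x>|y> -> |x>|x XOR y> for every pair of binary digits
checks = all(np.allclose(CNOT @ ket(x, y), ket(x, x ^ y))
             for x in (0, 1) for y in (0, 1))
```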
&lt;br /&gt;
One can immediately generalize the operation of the CNOT to a controlled-U gate. This&lt;br /&gt;
is a gate, shown in Fig. 2.4, which implements a unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; on the second&lt;br /&gt;
qubit, if the state of the first is &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. The matrix transformation is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;CU_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{21} &amp;amp; u_{22} \end{array}\right),&amp;lt;/math&amp;gt;|2.34}}&lt;br /&gt;
&lt;br /&gt;
where the matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;U = \left(\begin{array}{cc}&lt;br /&gt;
          u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
          u_{21} &amp;amp; u_{22} \end{array}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
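&lt;br /&gt;
The block structure of Eq. (2.34) suggests a simple construction: place the identity in the upper-left block and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; in the lower-right block. A sketch (the helper function below is an illustrative construction, not from the original text):&lt;br /&gt;

```python
import numpy as np

def controlled(U):
    """Build the 4x4 controlled-U matrix of Eq. (2.34) from a 2x2 unitary U."""
    CU = np.eye(4, dtype=complex)
    CU[2:, 2:] = U          # U acts on the target only when the control is |1>
    return CU

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

CNOT = controlled(X)        # controlled-X reproduces the CNOT of Eq. (2.31)
CPHASE = controlled(Z)      # controlled-Z is a controlled-phase gate
```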
&lt;br /&gt;
For example, the controlled-phase gate is given in [[#Figure 2.5|Fig. 2.5]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CU.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.4: Circuit diagram for a CU gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Many-qubit Circuits====&lt;br /&gt;
&lt;br /&gt;
Many-qubit circuits are a straight-forward generalization of the single-qubit circuit diagrams.&lt;br /&gt;
For example, Fig. 2.6 shows the implementation of CNOT&amp;lt;math&amp;gt;_{14}&amp;lt;/math&amp;gt; and CNOT&amp;lt;math&amp;gt;_{23}&amp;lt;/math&amp;gt; in the&lt;br /&gt;
same diagram. The crossing of lines is not confusing since there is a target and control&lt;br /&gt;
which are clearly distinguished in each case.&lt;br /&gt;
&lt;br /&gt;
It is quite interesting however, that as the diagrams become more complicated, the possibility&lt;br /&gt;
arises that one may change between equivalent forms of a circuit that, in the end,&lt;br /&gt;
&amp;lt;div id =&amp;quot;Figure 2.5&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CP.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.5: Circuit diagram for a Controlled-phase (CPHASE) gate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Multiqcs.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.6: Multiple CNOT gates on a set of qubits.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
implements the same multiple-qubit unitary. For example, noting that &amp;lt;math&amp;gt;H(C_{PHASE})H = CNOT\,\!&amp;lt;/math&amp;gt;, the two&lt;br /&gt;
circuits in Fig. 2.7 implement the same two-qubit unitary transformation. This enables the&lt;br /&gt;
simplification of some quite complicated circuits.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Hzhequiv.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.7: Two circuits which are equivalent since they implement the same two-qubit&lt;br /&gt;
unitary transformation.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
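&lt;br /&gt;
The identity &amp;lt;math&amp;gt;H(C_{PHASE})H = CNOT\,\!&amp;lt;/math&amp;gt; behind Fig. 2.7 can be verified numerically; here the Hadamards act on the target qubit, so the two-qubit operator is &amp;lt;math&amp;gt;I\otimes H&amp;lt;/math&amp;gt; (a sketch, not part of the original text):&lt;br /&gt;

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CPHASE = np.diag([1, 1, 1, -1]).astype(complex)   # controlled-Z
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Hadamards on the target (second tensor factor) before and after CPHASE
IH = np.kron(I2, H)
equiv = np.allclose(IH @ CPHASE @ IH, CNOT)
```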
&lt;br /&gt;
===Measurement===&lt;br /&gt;
&lt;br /&gt;
Measurement in quantum mechanics is quite different from that of&lt;br /&gt;
classical mechanics.  In classical mechanics (and computing), one assumes that a measurement&lt;br /&gt;
can be made at will without disturbing or changing the state of the&lt;br /&gt;
physical system.  In quantum mechanics, this assumption cannot be&lt;br /&gt;
made.  This is important for a variety of reasons that will become&lt;br /&gt;
clear later.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Standard Prescription====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the introduction a simple example was provided to distinguish quantum states from classical states.  This example of &lt;br /&gt;
two wells with one particle can (with caution) be used here as well.  &lt;br /&gt;
&lt;br /&gt;
Consider the quantum state in a superposition of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert\psi\right\rangle = \alpha_0\left\vert 0\right\rangle +&lt;br /&gt;
    \alpha_1\left\vert 1\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.35}}&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  If the state is measured in&lt;br /&gt;
the computational basis, the result will be &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  As always, it is important to note that before the measurement the state is not in either of the computational basis states, but in a superposition of the two.&lt;br /&gt;
&lt;br /&gt;
This can easily be shown, for the particular case &amp;lt;math&amp;gt;\alpha_0 = \alpha_1 = 1/\sqrt{2}\,\!&amp;lt;/math&amp;gt;, by acting on the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; with a Hadamard transformation,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert \psi\right\rangle = \left\vert 0\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.36}}&lt;br /&gt;
&lt;br /&gt;
This state, produced from a unitary transformation of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt;, has probability &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; and probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;.  If it were in one or the other, then acting on the state with a Hadamard transformation would give some probability of it being in &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and some probability of being in &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;. (This argument is so&lt;br /&gt;
simple and pointed that it was taken almost word-for-word from  [[Bibliography#Mermin:qcbook|Mermin's book]], page 27.)  &lt;br /&gt;
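Eq. (2.36) can also be verified numerically. Below is a small Python/NumPy sketch (illustrative only, not part of the text) applying the Hadamard matrix to the equal superposition.&lt;br /&gt;

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Equal superposition |psi> = (|0> + |1>)/sqrt(2)
psi = (ket0 + ket1) / np.sqrt(2)

# H|psi> = |0>: the superposition is mapped to a definite basis state
print(np.allclose(H @ psi, ket0))  # True
```
&lt;br /&gt;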
&lt;br /&gt;
A measurement in the computational basis is said to project this state into either the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probabilities &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt; respectively.  To understand this as a projection, consider the following way in which the &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;-component of the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is found.  The state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is projected onto the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; mathematically by taking the [[Index#I|inner product]] (see [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|Section C.4]]) of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle 0\mid  \psi\right\rangle = \alpha_0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.37}}&lt;br /&gt;
&lt;br /&gt;
Notice that this is a complex number and that its complex conjugate&lt;br /&gt;
can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi \mid 0\right\rangle = \alpha_0^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.38}}&lt;br /&gt;
&lt;br /&gt;
Therefore the probability can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi\mid 0 \right\rangle \left\langle 0\mid\psi\right\rangle = \left\vert\left\langle &lt;br /&gt;
  0\mid \psi\right\rangle \right\vert^2.\,\!&amp;lt;/math&amp;gt;|2.39}}&lt;br /&gt;
&lt;br /&gt;
Now consider a multiple-qubit system with state &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert \Psi\right\rangle = \sum_i \alpha_i\left\vert i\right\rangle.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
The result of a measurement is again a projection: the&lt;br /&gt;
state is projected onto the basis state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;. The same properties hold for this more general&lt;br /&gt;
system.  &lt;br /&gt;
&lt;br /&gt;
To summarize, if a measurement is made on the system &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; is obtained with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Assuming that &amp;lt;math&amp;gt;\left\vert i\right\rangle \,\!&amp;lt;/math&amp;gt; results from the measurement, the state of the&lt;br /&gt;
system has been projected into the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  Therefore, the&lt;br /&gt;
state of the system immediately after the measurement is &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
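The measurement rule above can be illustrated with a short simulation. This Python/NumPy sketch (the state and sample size are chosen here for illustration) samples computational-basis outcomes with the Born-rule probabilities &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A normalized one-qubit state: |alpha_0|^2 = 0.7, |alpha_1|^2 = 0.3
alpha = np.array([np.sqrt(0.7), 1j * np.sqrt(0.3)])
probs = np.abs(alpha) ** 2

# Repeated measurements in the computational basis
outcomes = rng.choice([0, 1], size=100_000, p=probs)

# The observed frequency of outcome 1 approaches |alpha_1|^2 = 0.3
print(outcomes.mean())
```
&lt;br /&gt;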
&lt;br /&gt;
A circuit diagram with a measurement represented by a box with an&lt;br /&gt;
arrow is given in Figure 2.8.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementcd.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.8: The circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
An alternative is to put an &amp;lt;nowiki&amp;gt;&amp;quot;M&amp;quot;&amp;lt;/nowiki&amp;gt; inside the box.  This is shown in Fig. 2.9.  &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementM.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.9: An alternative circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an example, the measurement result can be used as input for a subsequent operation. The unitary transformation&lt;br /&gt;
in Figure 2.10 is one that depends upon the outcome of the&lt;br /&gt;
measurement.  Notice that the input information, since it is&lt;br /&gt;
classical, is represented by a double line.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurement.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.10: A circuit which includes a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Projection Operators====&lt;br /&gt;
&lt;br /&gt;
Projection operators are used quite often and the description of&lt;br /&gt;
measurement in the previous section is a good example of how they are&lt;br /&gt;
used.  One may ask, what is a projector?  In ordinary&lt;br /&gt;
three-dimensional space, a vector is written as &lt;br /&gt;
&amp;lt;math&amp;gt;\vec v=v_x\hat{x}+v_y\hat{y}+v_z\hat{z}\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; part of the&lt;br /&gt;
vector can be obtained by &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x}\cdot\vec v) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
This is the part of the vector lying along the x axis.  Notice that if&lt;br /&gt;
the projection is performed again, the same result is obtained&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x} \cdot v_x\hat{x}) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.41}}&lt;br /&gt;
&lt;br /&gt;
This is the defining characteristic of projection operations: when one is&lt;br /&gt;
performed twice, the second result is the same as the first.  &lt;br /&gt;
&lt;br /&gt;
This can be extended to the complex vectors in quantum mechanics.  The&lt;br /&gt;
outer product &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\!\!\left\langle{x}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector.  For example,&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector and can be written in matrix form as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert = \left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.42}}&lt;br /&gt;
&lt;br /&gt;
Acting with this on &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
           \alpha_1 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.43}}&lt;br /&gt;
&lt;br /&gt;
Acting again produces&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
              0 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.44}}&lt;br /&gt;
&lt;br /&gt;
This is due to the fact that&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert)^2 = \left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.45}}&lt;br /&gt;
&lt;br /&gt;
In fact, this property essentially defines a projection. A projection is&lt;br /&gt;
a linear transformation &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;P^2 = P\,\!&amp;lt;/math&amp;gt;. Much of our intuition about geometric projections in&lt;br /&gt;
three dimensions carries over to the more abstract cases. One important&lt;br /&gt;
example is that the sum of the projectors onto all the basis vectors is the identity. The&lt;br /&gt;
generalization to arbitrary dimensions, where &amp;lt;math&amp;gt;\left\vert{i}\right\rangle\,\!&amp;lt;/math&amp;gt; is any basis&lt;br /&gt;
vector in that space, is immediate.  In this case the identity,&lt;br /&gt;
expressed as a sum over all projectors, is &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sum_{i} \left\vert{i}\right\rangle\!\!\left\langle{i}\right\vert = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.46}}&lt;br /&gt;
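As a numerical check of Eqs. (2.45) and (2.46), here is a Python/NumPy sketch (the names are chosen here for illustration):&lt;br /&gt;

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

P0 = np.outer(ket0, ket0)    # the projector |0><0|
P1 = np.outer(ket1, ket1)    # the projector |1><1|

print(np.allclose(P0 @ P0, P0))         # True: P^2 = P, Eq. (2.45)
print(np.allclose(P0 + P1, np.eye(2)))  # True: projectors sum to the identity, Eq. (2.46)
```
&lt;br /&gt;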
&lt;br /&gt;
====Phase in/Phase out====&lt;br /&gt;
&lt;br /&gt;
The probability of finding the system in the state &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
where &amp;lt;math&amp;gt;x=0\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi}\right\rangle}(\left\vert{x}\right\rangle) &amp;amp;= \left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.47}}&lt;br /&gt;
Note that &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\langle{\psi}\right\vert\,\!&amp;lt;/math&amp;gt; both appear in this&lt;br /&gt;
expression. So if &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; were &lt;br /&gt;
substituted into the expression for &amp;lt;math&amp;gt;\mbox{Prob}(\left\vert{x}\right\rangle)\,\!&amp;lt;/math&amp;gt;, then the&lt;br /&gt;
expression would be unchanged, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi^\prime}\right\rangle}(\left\vert{x}\right\rangle) &lt;br /&gt;
                     &amp;amp;= \left\langle{\psi^\prime}\mid{x}\right\rangle\left\langle{x}\mid{\psi^\prime}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= e^{-i\theta}\left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle e^{i\theta} \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.48}}&lt;br /&gt;
Therefore when &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; changes by a phase, there is no effect on&lt;br /&gt;
this probability.  This is why it is often said that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
         e^{i\theta} &amp;amp; 0 \\&lt;br /&gt;
               0  &amp;amp; e^{-i\theta}  \end{array}\right) &lt;br /&gt;
= e^{i\theta}\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-i2\theta}  \end{array}\right) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.49}}&lt;br /&gt;
is equivalent to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.50}}&lt;br /&gt;
&lt;br /&gt;
However, there are times when a phase can make a difference. In&lt;br /&gt;
those cases it is really a ''relative'' phase between two states that makes the difference. This will become clear later on.&lt;br /&gt;
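The insensitivity of Eq. (2.48) to a global phase can also be seen numerically; the particular state and angle in this Python/NumPy sketch are arbitrary choices:&lt;br /&gt;

```python
import numpy as np

psi = np.array([0.6, 0.8j])              # a normalized state
psi_prime = np.exp(-1j * 0.37) * psi     # the same state up to a global phase

# Born-rule probabilities are unchanged by the global phase
print(np.allclose(np.abs(psi) ** 2, np.abs(psi_prime) ** 2))  # True
```
&lt;br /&gt;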
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 3 - Physics of Quantum Information#Introduction|Continue to '''Chapter 3 - Physics of Quantum Information''']]&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1781</id>
		<title>Chapter 2 - Qubits and Collections of Qubits</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1781"/>
		<updated>2012-01-05T08:30:58Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Many-qubit Circuits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
There are several parts to any quantum information processing task. Some of these were&lt;br /&gt;
written down and discussed by David DiVincenzo in the early days of quantum computing&lt;br /&gt;
research and are therefore called DiVincenzo’s requirements for quantum computing. These&lt;br /&gt;
include, but are not limited to, the following, which will be discussed in this chapter. Other&lt;br /&gt;
requirements will be discussed later.&lt;br /&gt;
&lt;br /&gt;
Five requirements [[Bibliography#qcrequirements|DiVincenzo:2000]]:&lt;br /&gt;
#Be a scalable physical system with well-defined qubits&lt;br /&gt;
#Be initializable to a simple fiducial state such as &amp;lt;math&amp;gt;\left\vert{000...}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
#Have much longer decoherence times than gating times&lt;br /&gt;
#Have a universal set of quantum gates&lt;br /&gt;
#Permit qubit-specific measurements&lt;br /&gt;
&lt;br /&gt;
The first requirement is a set of two-state quantum systems which can serve as qubits. The&lt;br /&gt;
second is to be able to initialize the set of qubits to some reference state. In this chapter,&lt;br /&gt;
these will be taken for granted. The third concerns noise, which has become known by the&lt;br /&gt;
term decoherence. The term decoherence has had a more precise definition in the past,&lt;br /&gt;
but here it will usually be synonymous with noise. Noise and decoherence will be discussed in [[Chapter 6 - Noise in Quantum Systems|Chapter 6]].  This chapter is primarily concerned with the fifth of these criteria.  This will enable us to discuss many interesting aspects of the quantum information problem while postponing some technical details regarding the other criteria.&lt;br /&gt;
&lt;br /&gt;
===Qubit States===&lt;br /&gt;
&lt;br /&gt;
As mentioned in the introduction, a qubit, or quantum bit, is represented by a two-state&lt;br /&gt;
quantum system. It is referred to as a two-state quantum system, although there are many&lt;br /&gt;
physical examples of qubits which are represented by two different states of a quantum&lt;br /&gt;
system that has many available states. These two states are represented by the vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; and the qubit could be in the state &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;, the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, or a complex superposition of&lt;br /&gt;
these two. A qubit state which is an arbitrary superposition is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle,&amp;lt;/math&amp;gt; |2.1}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers. Our objective is to use these two states to store and&lt;br /&gt;
manipulate information. If the state of the system is confined to one state, the other, or a&lt;br /&gt;
superposition of the two, then&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1.\,\!&amp;lt;/math&amp;gt; |2.2}}&lt;br /&gt;
&lt;br /&gt;
This means that this vector is normalized, i.e. its magnitude (or length) is one. The set of all such&lt;br /&gt;
vectors forms a two-dimensional complex (so four-dimensional real) vector space.&amp;lt;ref name=&amp;quot;test&amp;quot;&amp;gt;[[Appendix B - Complex Numbers|Appendix B]] contains a basic introduction to complex numbers.&amp;lt;/ref&amp;gt; The basis vectors for such a space are the two vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which are called ''computational basis'' states. These two basis states are represented by&lt;br /&gt;
 &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{0}\right\rangle = \left(\begin{array}{c} 1 \\ 0\end{array}\right), \;\;\left\vert{1}\right\rangle = \left(\begin{array}{c} 0 \\ 1\end{array}\right).&amp;lt;/math&amp;gt; |2.3}}&lt;br /&gt;
&lt;br /&gt;
Thus, the qubit state can be rewritten as&lt;br /&gt;
&lt;br /&gt;
{{Equation |&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1\end{array}\right).&amp;lt;/math&amp;gt; |2.4}}&lt;br /&gt;
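Eqs. (2.1)-(2.4) translate directly into numerical linear algebra. A minimal Python/NumPy sketch (the amplitudes are chosen arbitrarily):&lt;br /&gt;

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

alpha0, alpha1 = 0.6, 0.8j               # satisfy |alpha_0|^2 + |alpha_1|^2 = 1
psi = alpha0 * ket0 + alpha1 * ket1      # Eq. (2.1), written as the column vector of Eq. (2.4)

print(np.isclose(np.linalg.norm(psi), 1))  # True: the state is normalized, Eq. (2.2)
```
&lt;br /&gt;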
&lt;br /&gt;
===Qubit Gates===&lt;br /&gt;
&lt;br /&gt;
During a computation, one qubit state will need to be taken to a different one. In fact,&lt;br /&gt;
any valid state should be able to be operated upon to obtain any other state. Since this&lt;br /&gt;
is a complex vector with magnitude one, the matrix transformation required for closed system&lt;br /&gt;
evolution is unitary. (See [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Appendix C, Sec. C.3.8]].) These unitary matrices, or unitary&lt;br /&gt;
transformations, as well as their generalization to many qubits, transform one complex&lt;br /&gt;
vector into another and are also called ''quantum gates'', or gating operations. Mathematically,&lt;br /&gt;
we may think of them as rotations of the complex vector and in some cases (but not all)&lt;br /&gt;
correspond to actual rotations of the physical system.&lt;br /&gt;
&lt;br /&gt;
====Circuit Diagrams for Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
Unitary transformations are represented in a circuit diagram with a box around the unitary&lt;br /&gt;
transformation. Consider a unitary transformation &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on a single qubit state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;. If the&lt;br /&gt;
result of the transformation is &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, we can then write&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = V\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.5}}&lt;br /&gt;
&lt;br /&gt;
The corresponding circuit diagram is shown in Fig. 2.1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:Vbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.1: Circuit diagram for a one-qubit gate that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;. The input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the output, &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notice that the diagram is read from left to right. This means that if two consecutive&lt;br /&gt;
gates are implemented, say &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;, the equation reads:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle = UV\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.6}}&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The circuit diagram will have the boxes in the reverse order from the equation, i.e.&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on the left and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; on the right (refer to Fig. 2.2 below). While this is somewhat confusing, it is important to remember this convention, since circuit diagrams will become increasingly important as the number of operations grows.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:UVbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.2: Circuit diagram for two one-qubit gates that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; followed by another unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt;. Like the single gate, the input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the new output, &amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
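The ordering convention can be checked with matrices. In the Python/NumPy sketch below, X and H serve as stand-ins for &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; (this particular choice is illustrative): applying &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; corresponds to the product &amp;lt;math&amp;gt;UV&amp;lt;/math&amp;gt; acting on the state vector.&lt;br /&gt;

```python
import numpy as np

V = np.array([[0, 1], [1, 0]], dtype=complex)                 # take V = X (bit-flip)
U = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # take U = H (Hadamard)

psi = np.array([1, 0], dtype=complex)    # start in |0>

# V first, then U: the equation reads |psi''> = U V |psi>
psi_out = U @ (V @ psi)

# X|0> = |1>, then H|1> = (|0> - |1>)/sqrt(2)
print(np.allclose(psi_out, np.array([1, -1]) / np.sqrt(2)))  # True
```
&lt;br /&gt;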
&lt;br /&gt;
====Examples of Important Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
There are, of course, an infinite number of possible unitary transformations that we could&lt;br /&gt;
implement on a single qubit since the set of unitary transformations can be parameterized by&lt;br /&gt;
three parameters. However, a single gate will contain a single unitary transformation, which&lt;br /&gt;
means that all three parameters are fixed. There are several such transformations that are&lt;br /&gt;
used repeatedly. For this reason, they are listed here along with their actions on a generic&lt;br /&gt;
state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;. Note that one could also completely define the transformation by&lt;br /&gt;
its action on a complete set of basis states.&lt;br /&gt;
&lt;br /&gt;
The following is called an &amp;lt;nowiki&amp;gt;“x”&amp;lt;/nowiki&amp;gt; gate, or a bit-flip, &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X = \left(\begin{array}{cc} 0 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.7}}&lt;br /&gt;
&lt;br /&gt;
Its action on a state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is to exchange the basis states,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X\left\vert{\psi}\right\rangle = \alpha_0\left\vert{1}\right\rangle + \alpha_1\left\vert{0}\right\rangle.&amp;lt;/math&amp;gt;|2.8}}&lt;br /&gt;
&lt;br /&gt;
For this reason, it is also sometimes called a NOT gate. However, this term will be avoided&lt;br /&gt;
because a general NOT gate does not exist for all quantum states. (It does work for all qubit&lt;br /&gt;
states, but this is a special case.)&lt;br /&gt;
&lt;br /&gt;
The next gate is called a ''phase gate'' or a “z” gate. It is also sometimes called a ''phase-flip'',&lt;br /&gt;
and is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z = \left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.9}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate is to introduce a sign change on the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which can be seen&lt;br /&gt;
through&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle - \alpha_1\left\vert{1}\right\rangle.&amp;lt;/math&amp;gt;|2.10}}&lt;br /&gt;
&lt;br /&gt;
The term phase gate is also used for the more general transformation&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;P = \left(\begin{array}{cc} e^{i\theta} &amp;amp; 0 \\ &lt;br /&gt;
                                0       &amp;amp; e^{-i\theta} \end{array}\right).&amp;lt;/math&amp;gt;|2.11}}&lt;br /&gt;
&lt;br /&gt;
For this reason, the z-gate will either be called a “z-gate” or a phase-flip gate.&lt;br /&gt;
&lt;br /&gt;
Another gate closely related to these is the “y” gate. This gate is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y =  \left(\begin{array}{cc} 0 &amp;amp; -i \\ &lt;br /&gt;
                      i &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.12}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate on a state is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y\left\vert{\psi}\right\rangle = -i\alpha_1\left\vert{0}\right\rangle +i \alpha_0\left\vert{1}\right\rangle &lt;br /&gt;
            = -i(\alpha_1\left\vert{0}\right\rangle - \alpha_0\left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.13}}&lt;br /&gt;
&lt;br /&gt;
From this last expression, it is clear that, up to an overall factor of &amp;lt;math&amp;gt;-i\,\!&amp;lt;/math&amp;gt;, this gate is the same&lt;br /&gt;
as acting on a state with both &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt; gates. However, the order matters, and it&lt;br /&gt;
should be noted that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;XZ = -i Y,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
whereas&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;ZX = i Y.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that the order matters should not be a surprise to anyone since matrices in general&lt;br /&gt;
do not commute. However, such a condition arises so often in quantum mechanics that the&lt;br /&gt;
difference between these two is given an expression and a name. The difference between the two is called the ''commutator'' and is denoted with a &amp;lt;math&amp;gt;[\cdot,\cdot]&amp;lt;/math&amp;gt;. That is, for any two matrices, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;, the commutator is defined to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[A,B] = AB -BA.\,\!&amp;lt;/math&amp;gt;|2.14}}&lt;br /&gt;
For the two gates &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt;,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[X,Z] = -2iY.\,\!&amp;lt;/math&amp;gt;|2.15}}&lt;br /&gt;
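These products and the commutator of Eq. (2.15) are easy to confirm numerically; here is a Python/NumPy sketch:&lt;br /&gt;

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

print(np.allclose(X @ Z, -1j * Y))            # True: XZ = -iY
print(np.allclose(Z @ X, 1j * Y))             # True: ZX = +iY
print(np.allclose(X @ Z - Z @ X, -2j * Y))    # True: [X, Z] = -2iY, Eq. (2.15)
```
&lt;br /&gt;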
A very important gate which is used in many quantum information processing protocols,&lt;br /&gt;
including quantum algorithms, is called the Hadamard gate,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.16}}&lt;br /&gt;
In this case, it's helpful to look at what this gate does to the two basis states:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H \left\vert{0}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle + \left\vert{1}\right\rangle), &amp;lt;/math&amp;gt;&amp;lt;br /&amp;gt;&amp;lt;math&amp;gt;H \left\vert{1}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle - \left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.17}}&lt;br /&gt;
&lt;br /&gt;
So the Hadamard gate will take either one of the basis states and produce an equal superposition&lt;br /&gt;
of the two basis states; this is the reason it is so-often used in quantum information&lt;br /&gt;
processing tasks. On a generic state,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert{\psi}\right\rangle = \frac{1}{\sqrt{2}}[(\alpha_0+\alpha_1)\left\vert{0}\right\rangle + (\alpha_0-\alpha_1)\left\vert{1}\right\rangle].&amp;lt;/math&amp;gt;|2.18}}&lt;br /&gt;
&lt;br /&gt;
===The Pauli Matrices===&lt;br /&gt;
The three matrices &amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt; [[#eq2.7|Eq.(2.7)]], &amp;lt;math&amp;gt;Y\,\!&amp;lt;/math&amp;gt; [[#eq2.12|Eq.(2.12)]], and &amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt; [[#eq2.9|Eq.(2.9)]] are called the Pauli matrices. They are also sometimes denoted &amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt; respectively. They are ubiquitous in quantum computing and quantum information processing. This is because they, along with the &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix, form a basis for the set of &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrices and can be used to&lt;br /&gt;
describe all &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; unitary transformations as well. We will return to the latter point in the next chapter.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table2.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 2.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot; |Table 2.1: ''The Pauli Matrices.  The table shows the Pauli matrices, three different, but common notations, and the action on a state.  The &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a binary digit, 0 or 1.''&lt;br /&gt;
|-&lt;br /&gt;
|Pauli Matrix&lt;br /&gt;
|Notation 1&lt;br /&gt;
|Notation 2&lt;br /&gt;
|Notation 3&lt;br /&gt;
|Action&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; 1 \\ 1 &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X|x\rangle = |x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; -i \\ i &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y =iXZ\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y|x\rangle = i(-1)^x|x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z|x\rangle = (-1)^x|x\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
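The actions listed in the last column of Table 2.1 can be verified directly; this NumPy sketch (an illustrative aside, with our own variable names) checks all three, along with the identity &amp;lt;math&amp;gt;Y=iXZ&amp;lt;/math&amp;gt; from the table:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

ket = {0: np.array([1, 0]), 1: np.array([0, 1])}

for x in (0, 1):
    # X|x> = |x XOR 1>
    assert np.allclose(X @ ket[x], ket[x ^ 1])
    # Y|x> = i(-1)^x |x XOR 1>
    assert np.allclose(Y @ ket[x], 1j * (-1) ** x * ket[x ^ 1])
    # Z|x> = (-1)^x |x>
    assert np.allclose(Z @ ket[x], (-1) ** x * ket[x])

# The table's second notation column: Y = iXZ.
assert np.allclose(Y, 1j * X @ Z)
```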
&lt;br /&gt;
&lt;br /&gt;
To show that they form a basis for &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; Hermitian matrices, note that any such matrix can be written in the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;A = \left(\begin{array}{cc} &lt;br /&gt;
                a_0+a_3  &amp;amp; a_1+ia_2 \\ &lt;br /&gt;
                a_1-ia_2 &amp;amp; a_0-a_3 \end{array}\right).&amp;lt;/math&amp;gt;|2.19}}&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;a_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_3\,\!&amp;lt;/math&amp;gt; are arbitrary, &amp;lt;math&amp;gt;a_0 + a_3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_0 - a_3\,\!&amp;lt;/math&amp;gt; are arbitrary too. This matrix can be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}A &amp;amp;= a_0 \mathbb{I} + a_1X + a_2Y + a_3 Z \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + a_1\sigma_1 + a_2\sigma_2 + a_3 \sigma_3 \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + \vec{a}\cdot\vec{\sigma}, \\&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|2.20}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{a}\cdot\vec{\sigma} = \sum_{i=1}^3a_i\sigma_i\,\!&amp;lt;/math&amp;gt; is the &amp;quot;dot&lt;br /&gt;
product&amp;quot; between &amp;lt;math&amp;gt;\vec{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{\sigma} = (\sigma_1,\sigma_2,\sigma_3)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
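The decomposition of Eq. (2.20) can be checked numerically. In this sketch (an aside; the coefficient-extraction step anticipates the Hilbert-Schmidt orthogonality of Eq. (2.23), from which &amp;lt;math&amp;gt;a_i = \mbox{Tr}(\sigma_i A)/2&amp;lt;/math&amp;gt;), a random Hermitian matrix is rebuilt from its Pauli components:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

# Build a random 2x2 Hermitian matrix A = (M + M^dagger)/2.
rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
A = (M + M.conj().T) / 2

# Extract real coefficients a_0,...,a_3 via traces, then rebuild A.
paulis = (I2, X, Y, Z)
coeffs = [np.trace(P @ A).real / 2 for P in paulis]
rebuilt = sum(c * P for c, P in zip(coeffs, paulis))
assert np.allclose(rebuilt, A)
```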
&lt;br /&gt;
An important and useful relationship between these is the following (which shows why&lt;br /&gt;
the latter notation above is so useful)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma_i\sigma_j = \mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k,&amp;lt;/math&amp;gt;|2.21}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;i, j, k\,\!&amp;lt;/math&amp;gt; are numbers from the set &amp;lt;math&amp;gt;\{1, 2, 3\}\,\!&amp;lt;/math&amp;gt;, summation over the repeated index &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; is implied, and the definitions for &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; are given&lt;br /&gt;
in Eqs. [[Appendix C - Vectors and Linear Algebra#eqC.17|(C.17)]] and [[Appendix C - Vectors and Linear Algebra#eqC.8|(C.8)]] respectively. The three matrices &amp;lt;math&amp;gt;\sigma_1, \sigma_2, \sigma_3\,\!&amp;lt;/math&amp;gt; are traceless Hermitian&lt;br /&gt;
matrices and they can be seen to be orthogonal using the so-called ''Hilbert-Schmidt inner product'', which is defined, for matrices &amp;lt;math&amp;gt; A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(A,B) = \mbox{Tr}(A^\dagger B).&amp;lt;/math&amp;gt;|2.22}}&lt;br /&gt;
&lt;br /&gt;
The orthogonality for the set is then summarized as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\sigma_i,\sigma_j) = \mbox{Tr}(\sigma_i\sigma_j) = 2\delta_{ij}.\,\!&amp;lt;/math&amp;gt;|2.23}}&lt;br /&gt;
&lt;br /&gt;
This property is contained in Eq. [[#eq2.21|(2.21)]]. This one equation also contains all of the commutators.&lt;br /&gt;
Subtracting the equation with the product reversed,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = (\mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k) &lt;br /&gt;
                      -(\mathbb{I}\delta_{ji} +i \epsilon_{jik}\sigma_k),&amp;lt;/math&amp;gt;|2.24}}&lt;br /&gt;
&lt;br /&gt;
but &amp;lt;math&amp;gt;\delta_{ij}=\delta_{ji}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk} = -\epsilon_{jik}\,\!&amp;lt;/math&amp;gt;.  This can now be simplified,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = 2i \epsilon_{ijk}\sigma_k.\,\!&amp;lt;/math&amp;gt;|2.25}}&lt;br /&gt;
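Eqs. (2.21), (2.23), and (2.25) can all be verified in one loop over the index pairs. This NumPy sketch (an illustrative aside; `eps` is our own small helper implementing the Levi-Civita symbol on indices 0, 1, 2) confirms each relation:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
I2 = np.eye(2)

def eps(i, j, k):
    # Levi-Civita symbol on indices 0, 1, 2.
    return (i - j) * (j - k) * (k - i) / 2

for i in range(3):
    for j in range(3):
        # Eq. (2.21): sigma_i sigma_j = I delta_ij + i eps_ijk sigma_k.
        rhs = I2 * (i == j) + 1j * sum(eps(i, j, k) * sigma[k] for k in range(3))
        assert np.allclose(sigma[i] @ sigma[j], rhs)
        # Eq. (2.25): [sigma_i, sigma_j] = 2i eps_ijk sigma_k.
        comm = sigma[i] @ sigma[j] - sigma[j] @ sigma[i]
        assert np.allclose(comm, 2j * sum(eps(i, j, k) * sigma[k] for k in range(3)))
        # Eq. (2.23): Tr(sigma_i sigma_j) = 2 delta_ij.
        assert np.isclose(np.trace(sigma[i] @ sigma[j]), 2 * (i == j))
```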
&lt;br /&gt;
===States of Many Qubits===&lt;br /&gt;
Let us now consider the states of several (or many) qubits. For one qubit, there are two&lt;br /&gt;
possible basis states, say &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. If there are two qubits, each with these basis states,&lt;br /&gt;
basis states for the two together are found by using the tensor product. (See Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]].)&lt;br /&gt;
The set of basis states obtained in this way is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{0}\right\rangle\otimes\left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{1}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle\otimes\left\vert{1}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This set is more often written in short-hand notation as (again see Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]] for details and examples)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{00}\right\rangle, \; \left\vert{01}\right\rangle, \;&lt;br /&gt;
  \left\vert{10}\right\rangle, \; \left\vert{11}\right\rangle \right\},\,\!&amp;lt;/math&amp;gt;|2.26}}&lt;br /&gt;
&lt;br /&gt;
which can also be expressed as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left(\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right)&lt;br /&gt;
\right\}.\,\!&amp;lt;/math&amp;gt;|2.27}}&lt;br /&gt;
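The column vectors of Eq. (2.27) come from the tensor product, which NumPy implements as the Kronecker product. This sketch (an aside; the ordering matches the convention of Eq. (2.26)) builds the two-qubit basis:

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# Two-qubit basis |00>, |01>, |10>, |11> via the Kronecker (tensor) product.
basis = [np.kron(a, b) for a in (ket0, ket1) for b in (ket0, ket1)]

# Each state is the corresponding 4-component unit vector of Eq. (2.27).
for idx, v in enumerate(basis):
    expected = np.zeros(4)
    expected[idx] = 1
    assert np.allclose(v, expected)
```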
&lt;br /&gt;
The extension to three qubits is straight-forward,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{000}\right\rangle, \; \left\vert{001}\right\rangle, \;&lt;br /&gt;
  \left\vert{010}\right\rangle, \; \left\vert{011}\right\rangle, \; \left\vert{100}\right\rangle, \; \left\vert{101}\right\rangle, \;&lt;br /&gt;
  \left\vert{110}\right\rangle, \; \left\vert{111}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.28}}&lt;br /&gt;
&lt;br /&gt;
Those familiar with binary will recognize these as the numbers zero through seven. Thus we&lt;br /&gt;
consider this an ''ordered basis'', and the states can also be acceptably presented as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{2}\right\rangle, \; \left\vert{3}\right\rangle, \; \left\vert{4}\right\rangle, \; \left\vert{5}\right\rangle, \;&lt;br /&gt;
  \left\vert{6}\right\rangle, \; \left\vert{7}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.29}}&lt;br /&gt;
&lt;br /&gt;
The ordering of the products is important because each spot&lt;br /&gt;
corresponds to a physical particle or physical system.  When some&lt;br /&gt;
confusion may arise, we may also label the ket with a subscript to&lt;br /&gt;
denote the particle or position.  For example, two different people,&lt;br /&gt;
Alice and Bob, can be used to represent distant parties that may&lt;br /&gt;
share some information or wish to communicate.  In this case, the&lt;br /&gt;
state belonging to Alice can be denoted &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_A\,\!&amp;lt;/math&amp;gt;.  Or if she is&lt;br /&gt;
referred to as party 1 or particle 1, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The most general 2-qubit state is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_{00}\left\vert{00}\right\rangle + \alpha_{01}\left\vert{01}\right\rangle &lt;br /&gt;
             + \alpha_{10}\left\vert{10}\right\rangle + \alpha_{11}\left\vert{11}\right\rangle &lt;br /&gt;
           =\left(\begin{array}{c} \alpha_{00} \\ \alpha_{01} \\ &lt;br /&gt;
                                   \alpha_{10} \\ \alpha_{11} \end{array}\right).&amp;lt;/math&amp;gt;|2.30}}&lt;br /&gt;
&lt;br /&gt;
The normalization condition is &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_{00}|^2  + |\alpha_{01}|^2&lt;br /&gt;
             + |\alpha_{10}|^2 + |\alpha_{11}|^2=1.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
The generalization to an arbitrary number of qubits, say &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, is also&lt;br /&gt;
rather straight-forward and can be written as &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \sum_{i=0}^{2^n-1} \alpha_i\left\vert{i}\right\rangle.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Quantum Gates for Many Qubits===&lt;br /&gt;
&lt;br /&gt;
Just as in the case of a single qubit, the most general closed-system transformation of a&lt;br /&gt;
state of many qubits is a unitary transformation. Being able to make an arbitrary unitary&lt;br /&gt;
transformation on many qubits is an important task. If an arbitrary unitary transformation&lt;br /&gt;
on a set of qubits can be made, then any quantum gate can be implemented. If this ability to&lt;br /&gt;
implement any arbitrary quantum gate can be accomplished using a particular set of quantum&lt;br /&gt;
gates, that set is said to be a ''universal set of gates'' or that the condition of ''universality'' has&lt;br /&gt;
been met by this set. It turns out that there is a theorem which provides one way for&lt;br /&gt;
identifying a universal set of gates.&lt;br /&gt;
&lt;br /&gt;
'''Theorem:'''&lt;br /&gt;
&lt;br /&gt;
''The ability to implement an entangling gate between any two qubits, plus the ability to implement all single-qubit unitary transformations, will enable universal quantum computing.''&lt;br /&gt;
&lt;br /&gt;
It turns out that one doesn’t need to be able to perform an entangling gate between&lt;br /&gt;
distant qubits; nearest-neighbor interactions are sufficient. We can transfer the state of a&lt;br /&gt;
qubit to a qubit adjacent to the one we would like it to interact with, perform&lt;br /&gt;
the entangling gate between the two, and then transfer the state back.&lt;br /&gt;
&lt;br /&gt;
This is an important and often used theorem which will be the main focus of the next&lt;br /&gt;
few sections. A particular class of two-qubit gates which can be used to entangle qubits will&lt;br /&gt;
be discussed along with circuit diagrams for many qubits.&lt;br /&gt;
&lt;br /&gt;
====Controlled Operations====&lt;br /&gt;
&lt;br /&gt;
A controlled operation is one that is conditioned on the state of another part of the system, usually a qubit. The most cited example is the CNOT (controlled NOT) gate, which flips one (target) qubit if another (control) qubit is in the state &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;; thus it is a controlled NOT operation for qubits. This gate is used often enough to warrant detailed discussion here.&lt;br /&gt;
&lt;br /&gt;
Consider the following matrix operation on two qubits:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;C_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.31}}&lt;br /&gt;
&lt;br /&gt;
Under this transformation, the following changes occur:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{array}{c|c}&lt;br /&gt;
         \; \left\vert{\psi}\right\rangle\; &amp;amp; C_{12}\left\vert{\psi}\right\rangle \\ \hline&lt;br /&gt;
                \left\vert{00}\right\rangle &amp;amp; \left\vert{00}\right\rangle \\&lt;br /&gt;
                \left\vert{01}\right\rangle &amp;amp; \left\vert{01}\right\rangle \\&lt;br /&gt;
                \left\vert{10}\right\rangle &amp;amp; \left\vert{11}\right\rangle \\&lt;br /&gt;
                \left\vert{11}\right\rangle &amp;amp; \left\vert{10}\right\rangle &lt;br /&gt;
\end{array}&amp;lt;/math&amp;gt;|2.32}}&lt;br /&gt;
&lt;br /&gt;
This transformation is called the CNOT, or controlled NOT, since the second bit is flipped&lt;br /&gt;
if the first is in the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt; and otherwise left alone. The circuit diagram for this transformation corresponds to the following representation of the gate. Let &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; be zero or one.&lt;br /&gt;
The CNOT is then given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{x}\right\rangle_{i}\left\vert{y}\right\rangle_{j} \overset{CNOT}{\rightarrow} \left\vert{x}\right\rangle_{i}\left\vert{x\oplus y}\right\rangle_{j}.&amp;lt;/math&amp;gt;|2.33}}&lt;br /&gt;
&lt;br /&gt;
In binary, of course &amp;lt;math&amp;gt;0\oplus 0 =0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;0\oplus 1 = 1 = 1\oplus 0&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;1\oplus 1 =0&amp;lt;/math&amp;gt;.  The circuit diagram is given in Fig. 2.3 below. &lt;br /&gt;
The first qubit at the top of the diagram, &amp;lt;math&amp;gt;\left\vert{x}\right\rangle&amp;lt;/math&amp;gt;, is called the&lt;br /&gt;
''control bit'' while the one below, &amp;lt;math&amp;gt;\left\vert{y}\right\rangle&amp;lt;/math&amp;gt;, is called the ''target bit''.&lt;br /&gt;
&lt;br /&gt;
[[File:CNOT.jpg|center|400px]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Figure 2.3: Circuit diagram for a CNOT gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
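The truth table of Eq. (2.32) and the formula of Eq. (2.33) describe the same gate, which can be confirmed numerically. This NumPy sketch (an aside; names are our own) applies the matrix of Eq. (2.31) to each basis state:

```python
import numpy as np

# CNOT matrix, Eq. (2.31): control is the first qubit, target the second.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket = {0: np.array([1, 0]), 1: np.array([0, 1])}

# Eq. (2.33): |x>|y> -> |x>|x XOR y> for all four basis states.
for x in (0, 1):
    for y in (0, 1):
        out = CNOT @ np.kron(ket[x], ket[y])
        assert np.allclose(out, np.kron(ket[x], ket[x ^ y]))
```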
&lt;br /&gt;
One can immediately generalize the operation of the CNOT to a controlled-U gate. This&lt;br /&gt;
is a gate, shown in Fig. 2.4, which implements a unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; on the second&lt;br /&gt;
qubit, if the state of the first is &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. The matrix transformation is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;CU_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{21} &amp;amp; u_{22} \end{array}\right),&amp;lt;/math&amp;gt;|2.34}}&lt;br /&gt;
&lt;br /&gt;
where the matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;U = \left(\begin{array}{cc}&lt;br /&gt;
          u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
          u_{21} &amp;amp; u_{22} \end{array}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
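The block structure of Eq. (2.34) suggests a simple construction: place &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; in the lower-right &amp;lt;math&amp;gt;2\times 2&amp;lt;/math&amp;gt; block of the identity. In this sketch (an aside; `controlled` is our own helper, not the book's notation), taking &amp;lt;math&amp;gt;U=X&amp;lt;/math&amp;gt; recovers the CNOT of Eq. (2.31):

```python
import numpy as np

def controlled(U):
    """Embed a 2x2 unitary U as a controlled-U on two qubits, as in Eq. (2.34)."""
    CU = np.eye(4, dtype=complex)
    CU[2:, 2:] = U
    return CU

# With U = X this reproduces the CNOT matrix of Eq. (2.31).
X = np.array([[0, 1], [1, 0]])
assert np.allclose(controlled(X),
                   [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])

# With U = Z it gives the controlled-phase (CPHASE) gate.
Z = np.array([[1, 0], [0, -1]])
CPHASE = controlled(Z)
assert np.allclose(np.diag(CPHASE), [1, 1, 1, -1])
```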
&lt;br /&gt;
For example, the controlled-phase gate is given in [[#Figure 2.5|Fig. 2.5]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CU.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.4: Circuit diagram for a CU gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Many-qubit Circuits====&lt;br /&gt;
&lt;br /&gt;
Many-qubit circuits are a straight-forward generalization of the single-qubit circuit diagrams.&lt;br /&gt;
For example, Fig. 2.6 shows the implementation of CNOT&amp;lt;math&amp;gt;_{14}&amp;lt;/math&amp;gt; and CNOT&amp;lt;math&amp;gt;_{23}&amp;lt;/math&amp;gt; in the&lt;br /&gt;
same diagram. The crossing of lines is not confusing since there is a target and control&lt;br /&gt;
which are clearly distinguished in each case.&lt;br /&gt;
&lt;br /&gt;
It is quite interesting however, that as the diagrams become more complicated, the possibility&lt;br /&gt;
arises that one may change between equivalent forms of a circuit that, in the end,&lt;br /&gt;
&amp;lt;div id =&amp;quot;Figure 2.5&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CP.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.5: Circuit diagram for a Controlled-phase (CPHASE) gate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Multiqcs.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.6: Multiple CNOT gates on a set of qubits.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
implements the same multiple-qubit unitary. For example, noting that conjugating the CPHASE gate by Hadamards on the target qubit gives the CNOT, &amp;lt;math&amp;gt;(\mathbb{I}\otimes H)(CPHASE)(\mathbb{I}\otimes H) = CNOT\,\!&amp;lt;/math&amp;gt;, the two&lt;br /&gt;
circuits in Fig. 2.7 implement the same two-qubit unitary transformation. This enables the&lt;br /&gt;
simplification of some quite complicated circuits.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Hzhequiv.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.7: Two circuits which are equivalent since they implement the same two-qubit&lt;br /&gt;
unitary transformation.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
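The equivalence in Fig. 2.7 can be checked by direct matrix multiplication. This sketch (an aside; it takes the Hadamards to act on the target qubit, i.e. &amp;lt;math&amp;gt;\mathbb{I}\otimes H&amp;lt;/math&amp;gt;) confirms that Hadamard-conjugating a CPHASE yields a CNOT:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
CPHASE = np.diag([1.0, 1.0, 1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Hadamards on the target qubit before and after a CPHASE give a CNOT:
# (I (x) H) CPHASE (I (x) H) = CNOT.
IH = np.kron(I2, H)
assert np.allclose(IH @ CPHASE @ IH, CNOT)
```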
&lt;br /&gt;
===Measurement===&lt;br /&gt;
&lt;br /&gt;
Measurement in quantum mechanics is quite different from that of&lt;br /&gt;
classical mechanics.  In classical mechanics (and computing), one assumes that a measurement&lt;br /&gt;
can be made at will without disturbing or changing the state of the&lt;br /&gt;
physical system.  In quantum mechanics, this assumption cannot be&lt;br /&gt;
made.  This is important for a variety of reasons that will become&lt;br /&gt;
clear later.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Standard Prescription====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the introduction a simple example was provided to distinguish quantum states from classical states.  This example of &lt;br /&gt;
two wells with one particle can (with caution) be used here as well.  &lt;br /&gt;
&lt;br /&gt;
Consider the quantum state in a superposition of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert\psi\right\rangle = \alpha_0\left\vert 0\right\rangle +&lt;br /&gt;
    \alpha_1\left\vert 1\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.35}}&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  If the state is measured in&lt;br /&gt;
the computational basis, the result will be &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  As always, it is important to note that before the measurement the state is not in either of the computational basis states but in a superposition of the two.&lt;br /&gt;
&lt;br /&gt;
This can be easily shown by taking, for example, &amp;lt;math&amp;gt;\alpha_0=\alpha_1=1/\sqrt{2}\,\!&amp;lt;/math&amp;gt; and acting on the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; with a Hadamard transformation,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert \psi\right\rangle = \left\vert 0\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.36}}&lt;br /&gt;
&lt;br /&gt;
This state, produced from a unitary transformation of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt;, has probability &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; and probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;.  If it were in one or the other, then acting on the state with a Hadamard transformation would give some probability of it being in &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and some probability of being in &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;. (This argument is so&lt;br /&gt;
simple and pointed that it was taken almost word-for-word from  [[Bibliography#Mermin:qcbook|Mermin's book]], page 27.)  &lt;br /&gt;
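Mermin's argument can be seen numerically. This sketch (an aside; it uses the equal-amplitude case &amp;lt;math&amp;gt;\alpha_0=\alpha_1=1/\sqrt{2}&amp;lt;/math&amp;gt;) shows a state whose measurement outcomes are 50/50, yet which a Hadamard maps exactly to &amp;lt;math&amp;gt;\left\vert 0\right\rangle&amp;lt;/math&amp;gt;:

```python
import numpy as np

# The state of Eq. (2.35) with alpha_0 = alpha_1 = 1/sqrt(2).
psi = np.array([1, 1]) / np.sqrt(2)

# Born-rule probabilities in the computational basis: 50/50.
probs = np.abs(psi) ** 2
assert np.allclose(probs, [0.5, 0.5])

# Yet H|psi> = |0> exactly (Eq. 2.36), which could not happen if psi
# were secretly "in" |0> or |1>; it is a genuine superposition.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
assert np.allclose(H @ psi, [1, 0])
```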
&lt;br /&gt;
A measurement in the computational basis is said to project this state into either the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probabilities &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt; respectively.  To understand this as a projection, consider the following way in which the &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;-component of the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is found.  The state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is projected onto the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; mathematically by taking the [[Index#I|inner product]] (see [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|Section C.4]]) of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle 0\mid  \psi\right\rangle = \alpha_0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.37}}&lt;br /&gt;
&lt;br /&gt;
Notice that this is a complex number and that its complex conjugate&lt;br /&gt;
can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi \mid 0\right\rangle = \alpha_0^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.38}}&lt;br /&gt;
&lt;br /&gt;
Therefore the probability can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi\mid 0 \right\rangle \left\langle 0\mid\psi\right\rangle = \left\vert\left\langle &lt;br /&gt;
  0\mid \psi\right\rangle \right\vert^2.\,\!&amp;lt;/math&amp;gt;|2.39}}&lt;br /&gt;
&lt;br /&gt;
Now consider a multiple-qubit system with state &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert \Psi\right\rangle = \sum_i \alpha_i\left\vert i\right\rangle.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
The result of a measurement is a projection: the&lt;br /&gt;
state is projected onto the basis state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;; the same properties hold for this more general&lt;br /&gt;
system.  &lt;br /&gt;
&lt;br /&gt;
To summarize, if a measurement is made on the system &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; is obtained with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Assuming that &amp;lt;math&amp;gt;\left\vert i\right\rangle \,\!&amp;lt;/math&amp;gt; results from the measurement, the state of the&lt;br /&gt;
system has been projected into the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  Therefore, the&lt;br /&gt;
state of the system immediately after the measurement is &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
A circuit diagram with a measurement represented by a box with an&lt;br /&gt;
arrow is given in Figure 2.8.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementcd.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.8: The circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
An alternative is to put an &amp;lt;nowiki&amp;gt;&amp;quot;M&amp;quot;&amp;lt;/nowiki&amp;gt; inside the box.  This is shown in Fig. 2.9.  &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementM.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.9: An alternative circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an example, the measurement result can be used as input to a subsequent operation.  The unitary transform&lt;br /&gt;
in Figure 2.10 is one that depends upon the outcome of the&lt;br /&gt;
measurement.  Notice that the input information, since it is&lt;br /&gt;
classical, is represented by a double line.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurement.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.10: A circuit which includes a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Projection Operators====&lt;br /&gt;
&lt;br /&gt;
Projection operators are used quite often and the description of&lt;br /&gt;
measurement in the previous section is a good example of how they are&lt;br /&gt;
used.  One may ask, what is a projector?  In ordinary&lt;br /&gt;
three-dimensional space, a vector is written as &lt;br /&gt;
&amp;lt;math&amp;gt;\vec v=v_x\hat{x}+v_y\hat{y}+v_z\hat{z}\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; part of the&lt;br /&gt;
vector can be obtained by &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x}\cdot\vec v) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
This is the part of the vector lying along the x axis.  Notice that if&lt;br /&gt;
the projection is performed again, the same result is obtained&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x} \cdot v_x\hat{x}) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.41}}&lt;br /&gt;
&lt;br /&gt;
This is the defining characteristic of projection operations.  When one is&lt;br /&gt;
performed twice, the second result is the same as the first.  &lt;br /&gt;
&lt;br /&gt;
This can be extended to the complex vectors in quantum mechanics.  The&lt;br /&gt;
outer product &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\!\!\left\langle{x}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector.  For example,&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector and can be written in matrix form as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert = \left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.42}}&lt;br /&gt;
&lt;br /&gt;
Acting with this on &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
           \alpha_1 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.43}}&lt;br /&gt;
&lt;br /&gt;
Acting again produces&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
              0 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.44}}&lt;br /&gt;
&lt;br /&gt;
This is due to the fact that&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert)^2 = \left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.45}}&lt;br /&gt;
&lt;br /&gt;
In fact, this property essentially defines a projection.  A projection is&lt;br /&gt;
a linear transformation &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;P^2 = P\,\!&amp;lt;/math&amp;gt;. Much of our intuition about geometric projections in&lt;br /&gt;
three dimensions carries over to the more abstract cases.  One important&lt;br /&gt;
example is that the sum of the projectors onto all basis vectors is the identity. The&lt;br /&gt;
generalization to arbitrary dimensions, where &amp;lt;math&amp;gt;\left\vert{i}\right\rangle\,\!&amp;lt;/math&amp;gt; is any basis&lt;br /&gt;
vector in that space, is immediate.  In this case the identity,&lt;br /&gt;
expressed as a sum over all projectors, is &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sum_{i} \left\vert{i}\right\rangle\!\!\left\langle{i}\right\vert = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.46}}&lt;br /&gt;
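Both defining properties, idempotence (Eq. 2.45) and completeness (Eq. 2.46), are easy to check for the single-qubit projectors. This NumPy sketch (an aside; column vectors are used so that outer products come out as matrices) does so:

```python
import numpy as np

# Projectors |0><0| and |1><1| on a single qubit, built as outer products.
ket0 = np.array([[1], [0]])
ket1 = np.array([[0], [1]])
P0 = ket0 @ ket0.T
P1 = ket1 @ ket1.T

# Idempotence (Eq. 2.45): P^2 = P.
assert np.allclose(P0 @ P0, P0)
assert np.allclose(P1 @ P1, P1)

# Completeness (Eq. 2.46): the projectors sum to the identity.
assert np.allclose(P0 + P1, np.eye(2))

# Acting on a state keeps only the corresponding component (Eq. 2.43).
psi = np.array([[0.6], [0.8]])
assert np.allclose(P0 @ psi, [[0.6], [0.0]])
```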
&lt;br /&gt;
====Phase in/Phase out====&lt;br /&gt;
&lt;br /&gt;
The probability of finding the system in the state &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
where &amp;lt;math&amp;gt;x=0\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi}\right\rangle}(\left\vert{x}\right\rangle) &amp;amp;= \left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.47}}&lt;br /&gt;
Note that &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\langle{\psi}\right\vert\,\!&amp;lt;/math&amp;gt; both appear in this&lt;br /&gt;
expression. So if &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; were &lt;br /&gt;
substituted into the expression for &amp;lt;math&amp;gt;\mbox{Prob}(\left\vert{x}\right\rangle)\,\!&amp;lt;/math&amp;gt;, then the&lt;br /&gt;
expression would be unchanged:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi^\prime}\right\rangle}(\left\vert{x}\right\rangle) &lt;br /&gt;
                     &amp;amp;= \left\langle{\psi^\prime}\mid{x}\right\rangle\left\langle{x}\mid{\psi^\prime}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= e^{-i\theta}\left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle e^{i\theta} \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.48}}&lt;br /&gt;
Therefore when &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; changes by a phase, there is no effect on&lt;br /&gt;
this probability.  This is why it is often said that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
         e^{i\theta} &amp;amp; 0 \\&lt;br /&gt;
               0  &amp;amp; e^{-i\theta}  \end{array}\right) &lt;br /&gt;
= e^{i\theta}\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.49}}&lt;br /&gt;
is equivalent to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.50}}&lt;br /&gt;
&lt;br /&gt;
However, there are times when a phase can make a difference. In&lt;br /&gt;
those cases it is really a ''relative'' phase between two states that makes the difference. This will become clear later on.&lt;br /&gt;
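The insensitivity to a global phase shown in Eq. (2.48) can be demonstrated directly. A minimal sketch, assuming NumPy and an arbitrarily chosen state and phase:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random normalized qubit state
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi = psi / np.linalg.norm(psi)

# The same state multiplied by a global phase, as in Eq. (2.48)
theta = 0.7
psi_prime = np.exp(-1j * theta) * psi

# Probability of finding the basis state x = 0, per Eq. (2.47);
# np.vdot conjugates its first argument, giving the inner product
ket0 = np.array([1.0, 0.0])
prob = abs(np.vdot(ket0, psi)) ** 2
prob_prime = abs(np.vdot(ket0, psi_prime)) ** 2

# The global phase cancels, leaving the probability unchanged
assert np.isclose(prob, prob_prime)
```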
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 3 - Physics of Quantum Information#Introduction|Continue to '''Chapter 3 - Physics of Quantum Information''']]&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1780</id>
		<title>Chapter 2 - Qubits and Collections of Qubits</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1780"/>
		<updated>2012-01-05T08:30:44Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Many-qubit Circuits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
There are several parts to any quantum information processing task. Some of these were&lt;br /&gt;
written down and discussed by David DiVincenzo in the early days of quantum computing&lt;br /&gt;
research and are therefore called DiVincenzo’s requirements for quantum computing. These&lt;br /&gt;
include, but are not limited to, the following, which will be discussed in this chapter. Other&lt;br /&gt;
requirements will be discussed later.&lt;br /&gt;
&lt;br /&gt;
Five requirements [[Bibliography#qcrequirements|DiVincenzo:2000]]:&lt;br /&gt;
#Be a scalable physical system with well-defined qubits&lt;br /&gt;
#Be initializable to a simple fiducial state such as &amp;lt;math&amp;gt;\left\vert{000...}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
#Have much longer decoherence times than gating times&lt;br /&gt;
#Have a universal set of quantum gates&lt;br /&gt;
#Permit qubit-specific measurements&lt;br /&gt;
&lt;br /&gt;
The first requirement is a set of two-state quantum systems which can serve as qubits. The&lt;br /&gt;
second is to be able to initialize the set of qubits to some reference state. In this chapter,&lt;br /&gt;
these will be taken for granted. The third concerns noise, which has become known by &lt;br /&gt;
the term decoherence. The term decoherence has had a more precise definition in the past,&lt;br /&gt;
but here it will usually be synonymous with noise. Noise and decoherence will be discussed in [[Chapter 6 - Noise in Quantum Systems|Chapter 6]].  This chapter is primarily concerned with the fifth of these criteria.  This will enable us to discuss many interesting aspects of quantum information problems while postponing some other technical details regarding the other criteria.&lt;br /&gt;
&lt;br /&gt;
===Qubit States===&lt;br /&gt;
&lt;br /&gt;
As mentioned in the introduction, a qubit, or quantum bit, is represented by a two-state&lt;br /&gt;
quantum system. It is referred to as a two-state quantum system, although there are many&lt;br /&gt;
physical examples of qubits which are represented by two different states of a quantum&lt;br /&gt;
system that has many available states. These two states are represented by the vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; and the qubit could be in the state &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;, the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, or a complex superposition of&lt;br /&gt;
these two. A qubit state which is an arbitrary superposition is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle,&amp;lt;/math&amp;gt; |2.1}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers. Our objective is to use these two states to store and&lt;br /&gt;
manipulate information. If the state of the system is confined to one state, the other, or a&lt;br /&gt;
superposition of the two, then&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1.\,\!&amp;lt;/math&amp;gt; |2.2}}&lt;br /&gt;
&lt;br /&gt;
This means that this vector is normalized, i.e. its magnitude (or length) is one. The set of all such&lt;br /&gt;
vectors forms a two-dimensional complex (so four-dimensional real) vector space.&amp;lt;ref name=&amp;quot;test&amp;quot;&amp;gt;[[Appendix B - Complex Numbers|Appendix B]] contains a basic introduction to complex numbers.&amp;lt;/ref&amp;gt; The basis vectors for such a space are the two vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which are called ''computational basis'' states. These two basis states are represented by&lt;br /&gt;
 &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{0}\right\rangle = \left(\begin{array}{c} 1 \\ 0\end{array}\right), \;\;\left\vert{1}\right\rangle = \left(\begin{array}{c} 0 \\ 1\end{array}\right).&amp;lt;/math&amp;gt; |2.3}}&lt;br /&gt;
&lt;br /&gt;
Thus, the qubit state can be rewritten as&lt;br /&gt;
&lt;br /&gt;
{{Equation |&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1\end{array}\right).&amp;lt;/math&amp;gt; |2.4}}&lt;br /&gt;
&lt;br /&gt;
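The normalization condition of Eq. (2.2) and the column-vector form of Eq. (2.4) can be illustrated numerically. This sketch assumes NumPy; the amplitudes are an arbitrary choice for illustration:

```python
import numpy as np

# Example amplitudes (an arbitrary normalized choice)
alpha0 = (1 + 1j) / 2
alpha1 = (1 - 1j) / 2

# The qubit state of Eq. (2.4): a column of the two amplitudes
psi = np.array([alpha0, alpha1])

# Normalization, Eq. (2.2): |alpha0|^2 + |alpha1|^2 = 1
norm_sq = abs(alpha0) ** 2 + abs(alpha1) ** 2
assert np.isclose(norm_sq, 1.0)

# Equivalently, the expansion in the computational basis of Eq. (2.3)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
assert np.allclose(psi, alpha0 * ket0 + alpha1 * ket1)
```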
===Qubit Gates===&lt;br /&gt;
&lt;br /&gt;
During a computation, one qubit state will need to be taken to a different one. In fact,&lt;br /&gt;
it should be possible to take any valid state to any other valid state. Since this&lt;br /&gt;
is a complex vector with magnitude one, the matrix transformation required for closed system&lt;br /&gt;
evolution is unitary. (See [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Appendix C, Sec. C.3.8]].) These unitary matrices, or unitary&lt;br /&gt;
transformations, as well as their generalization to many qubits, transform one complex&lt;br /&gt;
vector into another and are also called ''quantum gates'', or gating operations. Mathematically,&lt;br /&gt;
we may think of them as rotations of the complex vector; in some cases (but not all) they&lt;br /&gt;
correspond to actual rotations of the physical system.&lt;br /&gt;
&lt;br /&gt;
====Circuit Diagrams for Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
Unitary transformations are represented in a circuit diagram with a box around the unitary&lt;br /&gt;
transformation. Consider a unitary transformation &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on a single qubit state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;. If the&lt;br /&gt;
result of the transformation is &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, we can then write&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = V\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.5}}&lt;br /&gt;
&lt;br /&gt;
The corresponding circuit diagram is shown in Fig. 2.1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:Vbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.1: Circuit diagram for a one-qubit gate that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;. The input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the output, &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notice that the diagram is read from left to right. This means that if two consecutive&lt;br /&gt;
gates are implemented, say &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;, the equation reads:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle = UV\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.6}}&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The circuit diagram will have the boxes in the reverse order from the equation, i.e.&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on the left and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; on the right (refer to Fig. 2.2 below). While this is somewhat confusing, it is important to remember this convention; circuit diagrams will become increasingly important as the number of operations grows.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:UVbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.2: Circuit diagram for two one-qubit gates that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; followed by another unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt;. Like the single gate, the input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the new output, &amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Examples of Important Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
There are, of course, an infinite number of possible unitary transformations that we could&lt;br /&gt;
implement on a single qubit, since the set of single-qubit unitary transformations can be parameterized&lt;br /&gt;
(up to an overall phase) by three real parameters. However, a single gate will contain a single unitary transformation, which&lt;br /&gt;
means that all three parameters are fixed. There are several such transformations that are&lt;br /&gt;
used repeatedly. For this reason, they are listed here along with their actions on a generic&lt;br /&gt;
state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;. Note that one could also completely define the transformation by&lt;br /&gt;
its action on a complete set of basis states.&lt;br /&gt;
&lt;br /&gt;
The following is called an &amp;lt;nowiki&amp;gt;“x”&amp;lt;/nowiki&amp;gt; gate, or a bit-flip, &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X = \left(\begin{array}{cc} 0 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.7}}&lt;br /&gt;
&lt;br /&gt;
Its action on a state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is to exchange the basis states,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X\left\vert{\psi}\right\rangle = \alpha_0\left\vert{1}\right\rangle + \alpha_1\left\vert{0}\right\rangle.&amp;lt;/math&amp;gt;|2.8}}&lt;br /&gt;
&lt;br /&gt;
For this reason, it is also sometimes called a NOT gate. However, this term will be avoided&lt;br /&gt;
because a general NOT gate does not exist for all quantum states. (It does work for all qubit&lt;br /&gt;
states, but this is a special case.)&lt;br /&gt;
&lt;br /&gt;
The next gate is called a ''phase gate'' or a “z” gate. It is also sometimes called a ''phase-flip'',&lt;br /&gt;
and is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z = \left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.9}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate is to introduce a sign change on the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, which can be seen&lt;br /&gt;
through&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle - \alpha_1\left\vert{1}\right\rangle.&amp;lt;/math&amp;gt;|2.10}}&lt;br /&gt;
&lt;br /&gt;
The term phase gate is also used for the more general transformation&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;P = \left(\begin{array}{cc} e^{i\theta} &amp;amp; 0 \\ &lt;br /&gt;
                                0       &amp;amp; e^{-i\theta} \end{array}\right).&amp;lt;/math&amp;gt;|2.11}}&lt;br /&gt;
&lt;br /&gt;
For this reason, the z-gate will either be called a “z-gate” or a phase-flip gate.&lt;br /&gt;
&lt;br /&gt;
Another gate closely related to these is the “y” gate. This gate is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y =  \left(\begin{array}{cc} 0 &amp;amp; -i \\ &lt;br /&gt;
                      i &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.12}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate on a state is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y\left\vert{\psi}\right\rangle = -i\alpha_1\left\vert{0}\right\rangle +i \alpha_0\left\vert{1}\right\rangle &lt;br /&gt;
            = -i(\alpha_1\left\vert{0}\right\rangle - \alpha_0\left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.13}}&lt;br /&gt;
&lt;br /&gt;
From this last expression, it is clear that, up to an overall factor of &amp;lt;math&amp;gt;-i\,\!&amp;lt;/math&amp;gt;, this gate is the same&lt;br /&gt;
as acting on a state with both &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt; gates. However, the order matters, and it&lt;br /&gt;
should be noted that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;XZ = -i Y,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
whereas&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;ZX = i Y.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that the order matters should not be a surprise to anyone since matrices in general&lt;br /&gt;
do not commute. However, such a condition arises so often in quantum mechanics that the&lt;br /&gt;
difference between the two products is given a name and a notation: it is called the ''commutator'' and is denoted with a &amp;lt;math&amp;gt;[\cdot,\cdot]&amp;lt;/math&amp;gt;. That is, for any two matrices, &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;, the commutator is defined to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[A,B] = AB -BA.\,\!&amp;lt;/math&amp;gt;|2.14}}&lt;br /&gt;
For the two gates &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt;,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[X,Z] = -2iY.\,\!&amp;lt;/math&amp;gt;|2.15}}&lt;br /&gt;
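The relations above, including Eq. (2.15), can be verified with a few lines of matrix arithmetic. A minimal sketch, assuming NumPy:

```python
import numpy as np

# The Pauli gates of Eqs. (2.7), (2.12), and (2.9)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Order matters: XZ and ZX differ by a sign
assert np.allclose(X @ Z, -1j * Y)
assert np.allclose(Z @ X, 1j * Y)

# The commutator of Eq. (2.14) then gives Eq. (2.15)
commutator = X @ Z - Z @ X
assert np.allclose(commutator, -2j * Y)
```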
A very important gate which is used in many quantum information processing protocols,&lt;br /&gt;
including quantum algorithms, is called the Hadamard gate,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.16}}&lt;br /&gt;
In this case, it's helpful to look at what this gate does to the two basis states:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H \left\vert{0}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle + \left\vert{1}\right\rangle), &amp;lt;/math&amp;gt;&amp;lt;br /&amp;gt;&amp;lt;math&amp;gt;H \left\vert{1}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle - \left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.17}}&lt;br /&gt;
&lt;br /&gt;
So the Hadamard gate will take either one of the basis states and produce an equal superposition&lt;br /&gt;
of the two basis states; this is the reason it is so-often used in quantum information&lt;br /&gt;
processing tasks. On a generic state,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert{\psi}\right\rangle = \frac{1}{\sqrt{2}}[(\alpha_0+\alpha_1)\left\vert{0}\right\rangle + (\alpha_0-\alpha_1)\left\vert{1}\right\rangle].&amp;lt;/math&amp;gt;|2.18}}&lt;br /&gt;
&lt;br /&gt;
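The action of the Hadamard gate on the basis states, Eq. (2.17), is easy to confirm numerically. A minimal sketch, assuming NumPy:

```python
import numpy as np

# The Hadamard gate of Eq. (2.16)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Eq. (2.17): each basis state maps to an equal superposition
assert np.allclose(H @ ket0, (ket0 + ket1) / np.sqrt(2))
assert np.allclose(H @ ket1, (ket0 - ket1) / np.sqrt(2))

# Applying H twice returns the original state: H is its own inverse
assert np.allclose(H @ H, np.eye(2))
```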
===The Pauli Matrices===&lt;br /&gt;
The three matrices &amp;lt;math&amp;gt;X,\,\!&amp;lt;/math&amp;gt; [[#eq2.7|Eq.(2.7)]] &amp;lt;math&amp;gt;Y,\,\!&amp;lt;/math&amp;gt; [[#eq2.12|Eq.(2.12)]]  and &amp;lt;math&amp;gt; Z \,\!&amp;lt;/math&amp;gt; [[#eq2.9|Eq.(2.9)]] are called the Pauli matrices. They are also sometimes denoted &amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt; respectively. They are ubiquitous in quantum computing and quantum information processing. This is because they, along with the &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix, form a basis for the set of &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrices and can be used to&lt;br /&gt;
describe all &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; unitary transformations as well. We will return to the latter point in the next chapter.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table2.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 2.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot; |Table 2.1: ''The Pauli Matrices.  The table shows the Pauli matrices, three different, but common notations, and the action on a state.  The &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a binary digit, 0 or 1.''&lt;br /&gt;
|-&lt;br /&gt;
|Pauli Matrix&lt;br /&gt;
|Notation 1&lt;br /&gt;
|Notation 2&lt;br /&gt;
|Notation 3&lt;br /&gt;
|Action&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; 1 \\ 1 &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X|x\rangle = |x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; -i \\ i &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y =iXZ\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y|x\rangle = i(-1)^x|x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z|x\rangle = (-1)^x|x\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To show that they form a basis for &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; Hermitian matrices, note that any such matrix can be written in the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;A = \left(\begin{array}{cc} &lt;br /&gt;
                a_0+a_3  &amp;amp; a_1-ia_2 \\ &lt;br /&gt;
                a_1+ia_2 &amp;amp; a_0-a_3 \end{array}\right).&amp;lt;/math&amp;gt;|2.19}}&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;a_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_3\,\!&amp;lt;/math&amp;gt; are arbitrary, &amp;lt;math&amp;gt;a_0 + a_3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_0 - a_3\,\!&amp;lt;/math&amp;gt; are arbitrary too. This matrix can be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}A &amp;amp;= a_0 \mathbb{I} + a_1X + a_2Y + a_3 Z \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + a_1\sigma_1 + a_2\sigma_2 + a_3 \sigma_3 \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + \vec{a}\cdot\vec{\sigma}, \\&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|2.20}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{a}\cdot\vec{\sigma} = \sum_{i=1}^3a_i\sigma_i\,\!&amp;lt;/math&amp;gt; is the &amp;quot;dot&lt;br /&gt;
product&amp;quot; between &amp;lt;math&amp;gt;\vec{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{\sigma} = (\sigma_1,\sigma_2,\sigma_3)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
An important and useful relationship between these is the following (which shows why&lt;br /&gt;
the latter notation above is so useful)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma_i\sigma_j = \mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k,&amp;lt;/math&amp;gt;|2.21}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;i, j, k\,\!&amp;lt;/math&amp;gt; are numbers from the set &amp;lt;math&amp;gt;\{1, 2, 3\}\,\!&amp;lt;/math&amp;gt;, summation over the repeated index &amp;lt;math&amp;gt;k\,\!&amp;lt;/math&amp;gt; is implied, and the definitions for &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; are given&lt;br /&gt;
in Eqs. [[Appendix C - Vectors and Linear Algebra#eqC.17|(C.17)]] and [[Appendix C - Vectors and Linear Algebra#eqC.8|(C.8)]] respectively. The three matrices &amp;lt;math&amp;gt;\sigma_1, \sigma_2, \sigma_3\,\!&amp;lt;/math&amp;gt; are traceless Hermitian&lt;br /&gt;
matrices and they can be seen to be orthogonal using the so-called ''Hilbert-Schmidt inner product'', which is defined, for matrices &amp;lt;math&amp;gt; A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(A,B) = \mbox{Tr}(A^\dagger B).&amp;lt;/math&amp;gt;|2.22}}&lt;br /&gt;
&lt;br /&gt;
The orthogonality for the set is then summarized as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\sigma_i,\sigma_j) = \mbox{Tr}(\sigma_i\sigma_j) = 2\delta_{ij}.\,\!&amp;lt;/math&amp;gt;|2.23}}&lt;br /&gt;
&lt;br /&gt;
This property follows from Eq. [[#eq2.21|(2.21)]] by taking the trace of both sides, using the fact that the Pauli matrices are traceless. That one equation also contains all of the commutators.&lt;br /&gt;
Subtracting the equation with the product reversed,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = (\mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k) &lt;br /&gt;
                      -(\mathbb{I}\delta_{ji} +i \epsilon_{jik}\sigma_k),&amp;lt;/math&amp;gt;|2.24}}&lt;br /&gt;
&lt;br /&gt;
but &amp;lt;math&amp;gt;\delta_{ij}=\delta_{ji}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk} = -\epsilon_{jik}\,\!&amp;lt;/math&amp;gt;.  This can now be simplified,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = 2i \epsilon_{ijk}\sigma_k.\,\!&amp;lt;/math&amp;gt;|2.25}}&lt;br /&gt;
&lt;br /&gt;
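The claims of this section — that the identity and the Pauli matrices form a basis for the 2 × 2 Hermitian matrices, and that the expansion coefficients of Eq. (2.20) can be recovered with the Hilbert-Schmidt inner product of Eq. (2.22) — can be checked numerically. A sketch assuming NumPy, with arbitrarily chosen coefficients for illustration:

```python
import numpy as np

# The identity and the Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Example real coefficients (an arbitrary choice)
a0, a1, a2, a3 = 0.5, -0.3, 1.2, 0.8

# Build A as in Eq. (2.20)
A = a0 * I + a1 * X + a2 * Y + a3 * Z

# A is Hermitian, as Eq. (2.19) requires
assert np.allclose(A, A.conj().T)

# Each coefficient is recovered via the Hilbert-Schmidt inner product
# of Eq. (2.22), using the orthogonality relation Tr(sigma_i sigma_j) = 2 delta_ij
for coeff, sigma in [(a0, I), (a1, X), (a2, Y), (a3, Z)]:
    assert np.isclose(np.trace(sigma.conj().T @ A).real / 2, coeff)
```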
===States of Many Qubits===&lt;br /&gt;
Let us now consider the states of several (or many) qubits. For one qubit, there are two&lt;br /&gt;
possible basis states, say &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. If there are two qubits, each with these basis states,&lt;br /&gt;
basis states for the two together are found by using the tensor product. (See Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]].)&lt;br /&gt;
The set of basis states obtained in this way is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{0}\right\rangle\otimes\left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{1}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle\otimes\left\vert{1}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This set is more often written in short-hand notation as (again see Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]] for details and examples)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{00}\right\rangle, \; \left\vert{01}\right\rangle, \;&lt;br /&gt;
  \left\vert{10}\right\rangle, \; \left\vert{11}\right\rangle \right\},\,\!&amp;lt;/math&amp;gt;|2.26}}&lt;br /&gt;
&lt;br /&gt;
which can also be expressed as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left(\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right)&lt;br /&gt;
\right\}.\,\!&amp;lt;/math&amp;gt;|2.27}}&lt;br /&gt;
&lt;br /&gt;
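The correspondence between the short-hand kets of Eq. (2.26) and the four-component vectors of Eq. (2.27) is exactly the Kronecker (tensor) product. A minimal sketch, assuming NumPy:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Two-qubit basis states via the tensor product, giving Eq. (2.27)
ket00 = np.kron(ket0, ket0)
ket01 = np.kron(ket0, ket1)
ket10 = np.kron(ket1, ket0)
ket11 = np.kron(ket1, ket1)

# Each is one of the four standard basis vectors of the 4-dimensional space,
# ordered as the binary numbers zero through three
assert np.allclose(ket00, [1, 0, 0, 0])
assert np.allclose(ket01, [0, 1, 0, 0])
assert np.allclose(ket10, [0, 0, 1, 0])
assert np.allclose(ket11, [0, 0, 0, 1])
```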
The extension to three qubits is straight-forward,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{000}\right\rangle, \; \left\vert{001}\right\rangle, \;&lt;br /&gt;
  \left\vert{010}\right\rangle, \; \left\vert{011}\right\rangle, \; \left\vert{100}\right\rangle, \; \left\vert{101}\right\rangle, \;&lt;br /&gt;
  \left\vert{110}\right\rangle, \; \left\vert{111}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.28}}&lt;br /&gt;
&lt;br /&gt;
Those familiar with binary will recognize these as the numbers zero through seven. Thus we&lt;br /&gt;
consider this an ''ordered basis'', and the states can also be acceptably presented as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{2}\right\rangle, \; \left\vert{3}\right\rangle, \; \left\vert{4}\right\rangle, \; \left\vert{5}\right\rangle, \;&lt;br /&gt;
  \left\vert{6}\right\rangle, \; \left\vert{7}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.29}}&lt;br /&gt;
&lt;br /&gt;
The ordering of the products is important because each spot&lt;br /&gt;
corresponds to a physical particle or physical system.  When some&lt;br /&gt;
confusion may arise, we may also label the ket with a subscript to&lt;br /&gt;
denote the particle or position.  For example, two different people,&lt;br /&gt;
Alice and Bob, can be used to represent distant parties that may&lt;br /&gt;
share some information or wish to communicate.  In this case, the&lt;br /&gt;
state belonging to Alice can be denoted &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_A\,\!&amp;lt;/math&amp;gt;.  Or if she is&lt;br /&gt;
referred to as party 1 or particle 1, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The most general 2-qubit state is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_{00}\left\vert{00}\right\rangle + \alpha_{01}\left\vert{01}\right\rangle &lt;br /&gt;
             + \alpha_{10}\left\vert{10}\right\rangle + \alpha_{11}\left\vert{11}\right\rangle &lt;br /&gt;
           =\left(\begin{array}{c} \alpha_{00} \\ \alpha_{01} \\ &lt;br /&gt;
                                   \alpha_{10} \\ \alpha_{11} \end{array}\right).&amp;lt;/math&amp;gt;|2.30}}&lt;br /&gt;
&lt;br /&gt;
The normalization condition is &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_{00}|^2  + |\alpha_{01}|^2&lt;br /&gt;
             + |\alpha_{10}|^2 + |\alpha_{11}|^2=1.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
The generalization to an arbitrary number of qubits, say &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, is also&lt;br /&gt;
rather straight-forward and can be written as &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \sum_{i=0}^{2^n-1} \alpha_i\left\vert{i}\right\rangle.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Quantum Gates for Many Qubits===&lt;br /&gt;
&lt;br /&gt;
Just as in the case of a single qubit, the most general closed-system transformation of a&lt;br /&gt;
state of many qubits is a unitary transformation. Being able to make an arbitrary unitary&lt;br /&gt;
transformation on many qubits is an important task. If an arbitrary unitary transformation&lt;br /&gt;
on a set of qubits can be made, then any quantum gate can be implemented. If this ability to&lt;br /&gt;
implement any arbitrary quantum gate can be accomplished using a particular set of quantum&lt;br /&gt;
gates, that set is said to be a ''universal set of gates'' or that the condition of ''universality'' has&lt;br /&gt;
been met by this set. It turns out that there is a theorem which provides one way for&lt;br /&gt;
identifying a universal set of gates.&lt;br /&gt;
&lt;br /&gt;
'''Theorem:'''&lt;br /&gt;
&lt;br /&gt;
''The ability to implement an entangling gate between any two qubits, plus the ability to implement all single-qubit unitary transformations, will enable universal quantum computing.''&lt;br /&gt;
&lt;br /&gt;
It turns out that one doesn’t need to be able to perform an entangling gate between&lt;br /&gt;
distant qubits; nearest-neighbor interactions are sufficient. We can transfer the state of a&lt;br /&gt;
qubit to a qubit that is next to the one we would like it to interact with, then perform&lt;br /&gt;
the entangling gate between the two and then transfer back.&lt;br /&gt;
&lt;br /&gt;
This is an important and often used theorem which will be the main focus of the next&lt;br /&gt;
few sections. A particular class of two-qubit gates which can be used to entangle qubits will&lt;br /&gt;
be discussed along with circuit diagrams for many qubits.&lt;br /&gt;
&lt;br /&gt;
====Controlled Operations====&lt;br /&gt;
&lt;br /&gt;
A controlled operation is one that is conditioned on the state of another part of the system, usually a qubit. The most cited example is the CNOT (controlled NOT) gate, which flips one (target) qubit if another (control) qubit is in the state &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;; thus it is a controlled NOT operation for qubits. This gate is used often enough to warrant detailed discussion here.&lt;br /&gt;
&lt;br /&gt;
Consider the following matrix operation on two qubits:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;C_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.31}}&lt;br /&gt;
&lt;br /&gt;
Under this transformation, the following changes occur:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{array}{c|c}&lt;br /&gt;
         \; \left\vert{\psi}\right\rangle\; &amp;amp; C_{12}\left\vert{\psi}\right\rangle \\ \hline&lt;br /&gt;
                \left\vert{00}\right\rangle &amp;amp; \left\vert{00}\right\rangle \\&lt;br /&gt;
                \left\vert{01}\right\rangle &amp;amp; \left\vert{01}\right\rangle \\&lt;br /&gt;
                \left\vert{10}\right\rangle &amp;amp; \left\vert{11}\right\rangle \\&lt;br /&gt;
                \left\vert{11}\right\rangle &amp;amp; \left\vert{10}\right\rangle &lt;br /&gt;
\end{array}&amp;lt;/math&amp;gt;|2.32}}&lt;br /&gt;
&lt;br /&gt;
This transformation is called the CNOT, or controlled NOT, since the second bit is flipped&lt;br /&gt;
if the first is in the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt; and otherwise left alone. The circuit diagram for this transformation corresponds to the following representation of the gate. Let &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; be zero or one.&lt;br /&gt;
The CNOT is then given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{x}\right\rangle_{i}\left\vert{y}\right\rangle_{j} \overset{CNOT}{\rightarrow} \left\vert{x}\right\rangle_{i}\left\vert{x\oplus y}\right\rangle_{j}.&amp;lt;/math&amp;gt;|2.33}}&lt;br /&gt;
&lt;br /&gt;
In binary, of course &amp;lt;math&amp;gt;0\oplus 0 =0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;0\oplus 1 = 1 = 1\oplus 0&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;1\oplus 1 =0&amp;lt;/math&amp;gt;.  The circuit diagram is given in Fig. 2.3 below. &lt;br /&gt;
The first qubit at the top of the diagram, &amp;lt;math&amp;gt;\left\vert{x}\right\rangle&amp;lt;/math&amp;gt;, is called the&lt;br /&gt;
''control bit'' while the one below, &amp;lt;math&amp;gt;\left\vert{y}\right\rangle&amp;lt;/math&amp;gt;, is called the ''target bit''.&lt;br /&gt;
&lt;br /&gt;
[[File:CNOT.jpg|center|400px]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Figure 2.3: Circuit diagram for a CNOT gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
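The action of the CNOT in Eqs. (2.31)–(2.33) can be checked numerically. The following is a minimal sketch (assuming Python with numpy, neither of which is part of the text) that builds the matrix of Eq. (2.31) and confirms the truth table of Eq. (2.32):&lt;br /&gt;

```python
import numpy as np

# Computational basis states |0⟩ and |1⟩ as column vectors
ket = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0])}

# CNOT matrix from Eq. (2.31): qubit 1 is the control, qubit 2 the target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# Verify Eq. (2.33): |x⟩|y⟩ maps to |x⟩|x ⊕ y⟩ for every basis input
for x in (0, 1):
    for y in (0, 1):
        state_in = np.kron(ket[x], ket[y])       # two-qubit basis state |xy⟩
        state_out = np.kron(ket[x], ket[x ^ y])  # expected |x, x ⊕ y⟩
        assert np.allclose(CNOT @ state_in, state_out)
```

The bitwise operator x ^ y plays the role of the binary addition x ⊕ y in Eq. (2.33).&lt;br /&gt;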
&lt;br /&gt;
One can immediately generalize the operation of the CNOT to a controlled-U gate. This&lt;br /&gt;
is a gate, shown in Fig. 2.4, which implements a unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; on the second&lt;br /&gt;
qubit if the state of the first is &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. The matrix transformation is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;CU_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{21} &amp;amp; u_{22} \end{array}\right),&amp;lt;/math&amp;gt;|2.34}}&lt;br /&gt;
&lt;br /&gt;
where the matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;U = \left(\begin{array}{cc}&lt;br /&gt;
          u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
          u_{21} &amp;amp; u_{22} \end{array}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For example, the controlled-phase gate is given in [[#Figure 2.5|Fig. 2.5]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CU.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.4: Circuit diagram for a CU gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
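The block form of Eq. (2.34) suggests a simple construction: place &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; in the lower-right 2×2 block of a 4×4 identity. A sketch (Python with numpy assumed; the function name controlled_u is an invention of this illustration):&lt;br /&gt;

```python
import numpy as np

def controlled_u(U):
    """Embed a 2x2 unitary U as the controlled-U matrix of Eq. (2.34)."""
    CU = np.eye(4, dtype=complex)
    CU[2:4, 2:4] = U   # lower-right block acts only when the control is |1⟩
    return CU

# Special case: U = Z gives the controlled-phase (controlled-Z) gate
Z = np.diag([1.0, -1.0])
CZ = controlled_u(Z)
assert np.allclose(CZ, np.diag([1, 1, 1, -1]))
```

Choosing U = X instead recovers the CNOT of Eq. (2.31).&lt;br /&gt;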
&lt;br /&gt;
====Many-qubit Circuits====&lt;br /&gt;
&lt;br /&gt;
Many-qubit circuits are a straightforward generalization of the single-qubit circuit diagrams.&lt;br /&gt;
For example, Fig. 2.6 shows the implementation of CNOT&amp;lt;math&amp;gt;_{14}&amp;lt;/math&amp;gt; and CNOT&amp;lt;math&amp;gt;_{23}&amp;lt;/math&amp;gt; in the&lt;br /&gt;
same diagram. The crossing of lines is not confusing since there is a target and control&lt;br /&gt;
which are clearly distinguished in each case.&lt;br /&gt;
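The two gates of Fig. 2.6 can also be written explicitly as matrices on the full four-qubit space. One standard construction, sketched here with numpy (an assumption of this illustration), uses projectors on the control qubit:&lt;br /&gt;

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
P0 = np.diag([1.0, 0.0])   # projector |0⟩⟨0| on the control qubit
P1 = np.diag([0.0, 1.0])   # projector |1⟩⟨1| on the control qubit

def kron_all(ops):
    """Tensor (Kronecker) product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# CNOT with qubit 1 as control and qubit 4 as target:
# identity when the control is |0⟩, X on the target when it is |1⟩
CNOT14 = kron_all([P0, I, I, I]) + kron_all([P1, I, I, X])

# CNOT with qubit 2 as control and qubit 3 as target
CNOT23 = kron_all([I, P0, I, I]) + kron_all([I, P1, X, I])

# The two gates act on disjoint qubits, so they commute
assert np.allclose(CNOT14 @ CNOT23, CNOT23 @ CNOT14)
```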
&lt;br /&gt;
It is quite interesting, however, that as the diagrams become more complicated, the possibility&lt;br /&gt;
arises that one may change between equivalent forms of a circuit that, in the end,&lt;br /&gt;
&amp;lt;div id =&amp;quot;Figure 2.5&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CP.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.5: Circuit diagram for a Controlled-phase (CPHASE) gate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Multiqcs.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.6: Multiple CNOT gates on a set of qubits.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
implements the same multiple-qubit unitary. For example, noting that &amp;lt;math&amp;gt;(H)(CPHASE)(H) = CNOT\,\!&amp;lt;/math&amp;gt;, the two&lt;br /&gt;
circuits in Fig. 2.7 implement the same two-qubit unitary transformation. This enables the&lt;br /&gt;
simplification of some quite complicated circuits.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Hzhequiv.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.7: Two circuits which are equivalent since they implement the same two-qubit&lt;br /&gt;
unitary transformation.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
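Taking the CPHASE in the equivalence above to be the controlled-Z gate, the identity can be verified directly by multiplying matrices (Python with numpy assumed):&lt;br /&gt;

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])                    # controlled-Z gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

# A Hadamard on the target qubit before and after controlled-Z gives CNOT
IH = np.kron(I, H)
assert np.allclose(IH @ CZ @ IH, CNOT)
```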
&lt;br /&gt;
===Measurement===&lt;br /&gt;
&lt;br /&gt;
Measurement in quantum mechanics is quite different from that of&lt;br /&gt;
classical mechanics.  In classical mechanics (and computing), one assumes that a measurement&lt;br /&gt;
can be made at will without disturbing or changing the state of the&lt;br /&gt;
physical system.  In quantum mechanics, this assumption cannot be&lt;br /&gt;
made.  This is important for a variety of reasons that will become&lt;br /&gt;
clear later.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Standard Prescription====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the introduction, a simple example was provided to distinguish quantum states from classical states.  This example of &lt;br /&gt;
two wells with one particle can (with caution) be used here as well.  &lt;br /&gt;
&lt;br /&gt;
Consider the quantum state in a superposition of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert\psi\right\rangle = \alpha_0\left\vert 0\right\rangle +&lt;br /&gt;
    \alpha_1\left\vert 1\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.35}}&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  If the state is measured in&lt;br /&gt;
the computational basis, the result will be &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  As always, it is important to note that before the measurement the system is not in either of the computational basis states but in a superposition of the two.&lt;br /&gt;
&lt;br /&gt;
This can be easily shown, for the equal superposition with &amp;lt;math&amp;gt;\alpha_0 = \alpha_1 = 1/\sqrt{2}\,\!&amp;lt;/math&amp;gt;, by acting on the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; with a Hadamard transformation,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert \psi\right\rangle = \left\vert 0\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.36}}&lt;br /&gt;
&lt;br /&gt;
This state, produced from a unitary transformation of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt;, has probability &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; and probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;.  If it were in one or the other, then acting on the state with a Hadamard transformation would give some probability of it being in &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and some probability of being in &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;. (This argument is so&lt;br /&gt;
simple and pointed that it was taken almost word-for-word from  [[Bibliography#Mermin:qcbook|Mermin's book]], page 27.)  &lt;br /&gt;
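For the equal superposition, with α₀ = α₁ = 1/√2, the claim of Eq. (2.36) can be checked numerically (Python with numpy assumed):&lt;br /&gt;

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard gate

# Equal superposition (|0⟩ + |1⟩)/sqrt(2)
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# H maps it to |0⟩ with certainty: probability 1 for |0⟩ and 0 for |1⟩
out = H @ psi
assert np.allclose(out, np.array([1.0, 0.0]))
```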
&lt;br /&gt;
A measurement in the computational basis is said to project this state into either the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probabilities &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt; respectively.  To understand this as a projection, consider the following way in which the &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;-component of the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is found.  The state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is projected onto the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; mathematically by taking the [[Index#I|inner product]] (see [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|Section C.4]]) of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle 0\mid  \psi\right\rangle = \alpha_0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.37}}&lt;br /&gt;
&lt;br /&gt;
Notice that this is a complex number and that its complex conjugate&lt;br /&gt;
can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi \mid 0\right\rangle = \alpha_0^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.38}}&lt;br /&gt;
&lt;br /&gt;
Therefore the probability can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi\mid 0 \right\rangle \left\langle 0\mid\psi\right\rangle = \left\vert\left\langle &lt;br /&gt;
  0\mid \psi\right\rangle \right\vert^2.\,\!&amp;lt;/math&amp;gt;|2.39}}&lt;br /&gt;
&lt;br /&gt;
Now consider a multiple-qubit system with state &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert \Psi\right\rangle = \sum_i \alpha_i\left\vert i\right\rangle.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
The result of a measurement is a projection: the&lt;br /&gt;
state is projected onto the basis state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;; the same properties hold for this more general&lt;br /&gt;
system.  &lt;br /&gt;
&lt;br /&gt;
To summarize, if a measurement is made on the system &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; is obtained with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Assuming that &amp;lt;math&amp;gt;\left\vert i\right\rangle \,\!&amp;lt;/math&amp;gt; results from the measurement, the state of the&lt;br /&gt;
system has been projected into the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  Therefore, the&lt;br /&gt;
state of the system immediately after the measurement is &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
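The measurement rule just summarized can be sketched as a small simulation (numpy assumed; the function name measure is an invention of this illustration): the outcome i is drawn with probability |α_i|², and the post-measurement state is |i⟩.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(psi):
    """Simulate a computational-basis measurement of a normalized state.

    Returns the outcome index i, drawn with probability |alpha_i|^2,
    together with the post-measurement state |i⟩.
    """
    probs = np.abs(psi) ** 2
    i = rng.choice(len(psi), p=probs)
    post = np.zeros_like(psi)
    post[i] = 1.0
    return i, post

psi = np.array([0.6, 0.8])   # |alpha_0|^2 = 0.36, |alpha_1|^2 = 0.64
outcome, post_state = measure(psi)
```

Repeating the measurement on the post-measurement state always returns the same outcome, since the state has been projected onto a basis state.&lt;br /&gt;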
&lt;br /&gt;
A circuit diagram with a measurement represented by a box with an&lt;br /&gt;
arrow is given in Figure 2.8.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementcd.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.8: The circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
An alternative is to put an &amp;lt;nowiki&amp;gt;&amp;quot;M&amp;quot;&amp;lt;/nowiki&amp;gt; inside the box.  This is shown in Fig. 2.9.  &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementM.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.9: An alternative circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an example, the measurement result can be used as input for another operation.  The unitary transform&lt;br /&gt;
in Figure 2.10 is one that depends upon the outcome of the&lt;br /&gt;
measurement.  Notice that the information input, since it is&lt;br /&gt;
classical, is represented by a double line.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurement.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.10: A circuit which includes a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Projection Operators====&lt;br /&gt;
&lt;br /&gt;
Projection operators are used quite often and the description of&lt;br /&gt;
measurement in the previous section is a good example of how they are&lt;br /&gt;
used.  One may ask, what is a projector?  In ordinary&lt;br /&gt;
three-dimensional space, a vector is written as &lt;br /&gt;
&amp;lt;math&amp;gt;\vec v=v_x\hat{x}+v_y\hat{y}+v_z\hat{z}\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; part of the&lt;br /&gt;
vector can be obtained by &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x}\cdot\vec v) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
This is the part of the vector lying along the x axis.  Notice that if&lt;br /&gt;
the projection is performed again, the same result is obtained&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x} \cdot v_x\hat{x}) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.41}}&lt;br /&gt;
&lt;br /&gt;
This is the characteristic property of projection operations.  When one is&lt;br /&gt;
performed twice, the second result is the same as the first.  &lt;br /&gt;
&lt;br /&gt;
This can be extended to the complex vectors in quantum mechanics.  The&lt;br /&gt;
outer product &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\!\!\left\langle{x}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector.  For example,&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector and can be written in matrix form as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert = \left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.42}}&lt;br /&gt;
&lt;br /&gt;
Acting with this on &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
           \alpha_1 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.43}}&lt;br /&gt;
&lt;br /&gt;
Acting again produces&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
              0 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.44}}&lt;br /&gt;
&lt;br /&gt;
This is due to the fact that&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert)^2 = \left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.45}}&lt;br /&gt;
&lt;br /&gt;
In fact, this property essentially defines a projection.  A projection is&lt;br /&gt;
a linear transformation &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;P^2 = P\,\!&amp;lt;/math&amp;gt;. Much of our intuition about geometric projections in&lt;br /&gt;
three dimensions carries over to the more abstract cases.  One important&lt;br /&gt;
example is that the sum over projectors onto all basis vectors is the identity. The&lt;br /&gt;
generalization to arbitrary dimensions, where &amp;lt;math&amp;gt;\left\vert{i}\right\rangle\,\!&amp;lt;/math&amp;gt; is any basis&lt;br /&gt;
vector in that space, is immediate.  In this case the identity,&lt;br /&gt;
expressed as a sum over all projectors, is &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sum_{i} \left\vert{i}\right\rangle\!\!\left\langle{i}\right\vert = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.46}}&lt;br /&gt;
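Both defining properties, idempotence (Eq. 2.45) and completeness (Eq. 2.46), can be checked for the qubit projectors (Python with numpy assumed):&lt;br /&gt;

```python
import numpy as np

# Projector |0⟩⟨0| of Eq. (2.42), built from the outer product
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
P0 = np.outer(ket0, ket0.conj())
P1 = np.outer(ket1, ket1.conj())

# Idempotence, Eq. (2.45): applying the projector twice changes nothing
assert np.allclose(P0 @ P0, P0)

# Completeness, Eq. (2.46): projectors onto a full basis sum to the identity
assert np.allclose(P0 + P1, np.eye(2))
```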
&lt;br /&gt;
====Phase in/Phase out====&lt;br /&gt;
&lt;br /&gt;
The probability of finding the system in the state &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
where &amp;lt;math&amp;gt;x=0\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi}\right\rangle}(\left\vert{x}\right\rangle) &amp;amp;= \left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.47}}&lt;br /&gt;
Note that &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\langle{\psi}\right\vert\,\!&amp;lt;/math&amp;gt; both appear in this&lt;br /&gt;
expression. So if &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; were &lt;br /&gt;
substituted into the expression for &amp;lt;math&amp;gt;\mbox{Prob}(\left\vert{x}\right\rangle)\,\!&amp;lt;/math&amp;gt;, then the&lt;br /&gt;
expression is unchanged, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi^\prime}\right\rangle}(\left\vert{x}\right\rangle) &lt;br /&gt;
                     &amp;amp;= \left\langle{\psi^\prime}\mid{x}\right\rangle\left\langle{x}\mid{\psi^\prime}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= e^{-i\theta}\left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle e^{i\theta} \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.48}}&lt;br /&gt;
Therefore when &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; changes by a phase, there is no effect on&lt;br /&gt;
this probability.  This is why it is often said that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
         e^{i\theta} &amp;amp; 0 \\&lt;br /&gt;
               0  &amp;amp; e^{-i\theta}  \end{array}\right) &lt;br /&gt;
= e^{i\theta}\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-i2\theta}  \end{array}\right) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.49}}&lt;br /&gt;
is equivalent to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.50}}&lt;br /&gt;
&lt;br /&gt;
However, there are times when a phase can make a difference. In&lt;br /&gt;
those cases it is really a ''relative'' phase between two states that makes the difference. This will become clear later on.&lt;br /&gt;
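The invariance of Eq. (2.48) under a global phase can be seen numerically (numpy assumed; the particular state and angle below are illustrative only):&lt;br /&gt;

```python
import numpy as np

theta = 0.7                             # an arbitrary phase angle
psi = np.array([0.6, 0.8j])             # a normalized state
psi_prime = np.exp(-1j * theta) * psi   # same state up to a global phase

# The probabilities of Eq. (2.47) are unchanged, as in Eq. (2.48)
probs = np.abs(psi) ** 2
probs_prime = np.abs(psi_prime) ** 2
assert np.allclose(probs, probs_prime)
```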
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 3 - Physics of Quantum Information#Introduction|Continue to '''Chapter 3 - Physics of Quantum Information''']]&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
	<entry>
		<id>https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1779</id>
		<title>Chapter 2 - Qubits and Collections of Qubits</title>
		<link rel="alternate" type="text/html" href="https://www2.physics.siu.edu/qunet/wiki/index.php?title=Chapter_2_-_Qubits_and_Collections_of_Qubits&amp;diff=1779"/>
		<updated>2012-01-05T08:30:14Z</updated>

		<summary type="html">&lt;p&gt;Anada: /* Many-qubit Circuits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;===Introduction===&lt;br /&gt;
&lt;br /&gt;
There are several parts to any quantum information processing task. Some of these were&lt;br /&gt;
written down and discussed by David DiVincenzo in the early days of quantum computing&lt;br /&gt;
research and are therefore called DiVincenzo’s requirements for quantum computing. These&lt;br /&gt;
include, but are not limited to, the following, which will be discussed in this chapter. Other&lt;br /&gt;
requirements will be discussed later.&lt;br /&gt;
&lt;br /&gt;
Five requirements [[Bibliography#qcrequirements|DiVincenzo:2000]]:&lt;br /&gt;
#Be a scalable physical system with well-defined qubits&lt;br /&gt;
#Be initializable to a simple fiducial state such as &amp;lt;math&amp;gt;\left\vert{000...}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
#Have much longer decoherence times than gating times&lt;br /&gt;
#Have a universal set of quantum gates&lt;br /&gt;
#Permit qubit-specific measurements&lt;br /&gt;
&lt;br /&gt;
The first requirement is a set of two-state quantum systems which can serve as qubits. The&lt;br /&gt;
second is to be able to initialize the set of qubits to some reference state. In this chapter,&lt;br /&gt;
these will be taken for granted. The third concerns noise, which has become known by&lt;br /&gt;
the term decoherence. The term decoherence has had a more precise definition in the past,&lt;br /&gt;
but here it will usually be synonymous with noise. Noise and decoherence will be discussed in [[Chapter 6 - Noise in Quantum Systems|Chapter 6]].  This chapter is primarily concerned with the fifth of these criteria.  This will enable us to discuss many interesting aspects of quantum information problems while postponing some technical details regarding the other criteria.&lt;br /&gt;
&lt;br /&gt;
===Qubit States===&lt;br /&gt;
&lt;br /&gt;
As mentioned in the introduction, a qubit, or quantum bit, is represented by a two-state&lt;br /&gt;
quantum system. It is referred to as a two-state quantum system, although there are many&lt;br /&gt;
physical examples of qubits which are represented by two different states of a quantum&lt;br /&gt;
system that has many available states. These two states are represented by the vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; and the qubit could be in the state &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt;, the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, or a complex superposition of&lt;br /&gt;
these two. A qubit state which is an arbitrary superposition is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle,&amp;lt;/math&amp;gt; |2.1}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\alpha_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\alpha_1\,\!&amp;lt;/math&amp;gt; are complex numbers. Our objective is to use these two states to store and&lt;br /&gt;
manipulate information. If the state of the system is confined to one state, the other, or a&lt;br /&gt;
superposition of the two, then&lt;br /&gt;
&lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1.\,\!&amp;lt;/math&amp;gt; |2.2}}&lt;br /&gt;
&lt;br /&gt;
This means that this vector is normalized, i.e. its magnitude (or length) is one. The set of all such&lt;br /&gt;
vectors forms a two-dimensional complex (so four-dimensional real) vector space.&amp;lt;ref name=&amp;quot;test&amp;quot;&amp;gt;[[Appendix B - Complex Numbers|Appendix B]] contains a basic introduction to complex numbers.&amp;lt;/ref&amp;gt; The basis vectors for such a space are the two vectors &amp;lt;math&amp;gt;\left\vert{0}\right\rangle&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt; which are called ''computational basis'' states. These two basis states are represented by&lt;br /&gt;
 &lt;br /&gt;
{{Equation | &amp;lt;math&amp;gt;\left\vert{0}\right\rangle = \left(\begin{array}{c} 1 \\ 0\end{array}\right), \;\;\left\vert{1}\right\rangle = \left(\begin{array}{c} 0 \\ 1\end{array}\right).&amp;lt;/math&amp;gt; |2.3}}&lt;br /&gt;
&lt;br /&gt;
Thus, the qubit state can be rewritten as&lt;br /&gt;
&lt;br /&gt;
{{Equation |&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \left(\begin{array}{c} \alpha_0 \\ \alpha_1\end{array}\right).&amp;lt;/math&amp;gt; |2.4}}&lt;br /&gt;
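Equations (2.1)–(2.4) amount to saying that a qubit state is a normalized complex two-component vector. A minimal sketch (Python with numpy assumed; the amplitudes chosen are illustrative only):&lt;br /&gt;

```python
import numpy as np

# Computational basis states of Eq. (2.3)
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# An arbitrary superposition, Eq. (2.1), with complex amplitudes
alpha0, alpha1 = 0.6, 0.8j
psi = alpha0 * ket0 + alpha1 * ket1

# Normalization condition of Eq. (2.2)
assert np.isclose(np.abs(alpha0) ** 2 + np.abs(alpha1) ** 2, 1.0)
```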
&lt;br /&gt;
===Qubit Gates===&lt;br /&gt;
&lt;br /&gt;
During a computation, one qubit state will need to be taken to a different one. In fact,&lt;br /&gt;
any valid state should be able to be operated upon to obtain any other state. Since this&lt;br /&gt;
is a complex vector with magnitude one, the matrix transformation required for closed system&lt;br /&gt;
evolution is unitary. (See [[Appendix C - Vectors and Linear Algebra#Unitary Matrices|Appendix C, Sec. C.3.8]].) These unitary matrices, or unitary&lt;br /&gt;
transformations, as well as their generalization to many qubits, transform one complex&lt;br /&gt;
vector into another and are also called ''quantum gates'', or gating operations. Mathematically,&lt;br /&gt;
we may think of them as rotations of the complex vector and in some cases (but not all)&lt;br /&gt;
correspond to actual rotations of the physical system.&lt;br /&gt;
&lt;br /&gt;
====Circuit Diagrams for Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
Unitary transformations are represented in a circuit diagram with a box around the unitary&lt;br /&gt;
transformation. Consider a unitary transformation &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on a single qubit state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;. If the&lt;br /&gt;
result of the transformation is &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, we can then write&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = V\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.5}}&lt;br /&gt;
&lt;br /&gt;
The corresponding circuit diagram is shown in Fig. 2.1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:Vbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.1: Circuit diagram for a one-qubit gate that implements the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt;. The input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the output, &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Notice that the diagram is read from left to right. This means that if two consecutive&lt;br /&gt;
gates are implemented, say &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; first and then &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;, the equation reads:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle = UV\left\vert{\psi}\right\rangle.&amp;lt;/math&amp;gt;|2.6}}&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The circuit diagram will have the boxes in the reverse order from the equation, i.e.&lt;br /&gt;
&amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; on the left and &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt; on the right (refer to Fig. 2.2 below). While this is somewhat confusing, it is important to remember the convention; circuit diagrams will become increasingly important as the number of operations grows larger.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|[[File:UVbox1qu.jpg]]&lt;br /&gt;
|&amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Figure 2.2: Circuit diagram for two one-qubit gates that implement the unitary transformation &amp;lt;math&amp;gt;V\,\!&amp;lt;/math&amp;gt; followed by another unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt;. Like the single gate, the input state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is on the left and the new output, &amp;lt;math&amp;gt;\left\vert{\psi^{\prime\prime}}\right\rangle&amp;lt;/math&amp;gt;, is on the right.&amp;lt;/center&amp;gt;&lt;br /&gt;
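The ordering convention of Eq. (2.6) can be checked with matrices: applying V first and then U corresponds to the product UV. A sketch (numpy assumed; the choices of U and V are illustrative only):&lt;br /&gt;

```python
import numpy as np

# Two one-qubit gates: V applied first, then U, as in Eq. (2.6)
V = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard as V
U = np.diag([1.0, -1.0])                               # Z as U

psi = np.array([1.0, 0.0])

# Applying V first and then U is the matrix product U V acting on |ψ⟩
step_by_step = U @ (V @ psi)
combined = (U @ V) @ psi
assert np.allclose(step_by_step, combined)
```

Since matrix products generally do not commute, reversing the order of the boxes in the diagram would implement a different unitary.&lt;br /&gt;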
&lt;br /&gt;
====Examples of Important Qubit Gates====&lt;br /&gt;
&lt;br /&gt;
There are, of course, an infinite number of possible unitary transformations that we could&lt;br /&gt;
implement on a single qubit since the set of unitary transformations can be parameterized by&lt;br /&gt;
three parameters. However, a single gate will contain a single unitary transformation, which&lt;br /&gt;
means that all three parameters are fixed. There are several such transformations that are&lt;br /&gt;
used repeatedly. For this reason, they are listed here along with their actions on a generic&lt;br /&gt;
state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;. Note that one could also completely define the transformation by&lt;br /&gt;
its action on a complete set of basis states.&lt;br /&gt;
&lt;br /&gt;
The following is called an &amp;lt;nowiki&amp;gt;“x”&amp;lt;/nowiki&amp;gt; gate, or a bit-flip, &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X = \left(\begin{array}{cc} 0 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.7}}&lt;br /&gt;
&lt;br /&gt;
Its action on a state &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle&amp;lt;/math&amp;gt; is to exchange the basis states,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;X\left\vert{\psi}\right\rangle = \alpha_0\left\vert{1}\right\rangle + \alpha_1\left\vert{0}\right\rangle,&amp;lt;/math&amp;gt;|2.8}}&lt;br /&gt;
&lt;br /&gt;
and for this reason it is also sometimes called a NOT gate. However, this term will be avoided&lt;br /&gt;
because a general NOT gate does not exist for all quantum states. (It does work for all qubit&lt;br /&gt;
states, but this is a special case.)&lt;br /&gt;
&lt;br /&gt;
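The bit-flip action of Eq. (2.8) is easy to check numerically. The following is a minimal sketch (not part of the original text) using NumPy with illustrative amplitudes:

```python
import numpy as np

# Pauli X (bit-flip) gate
X = np.array([[0, 1],
              [1, 0]])

# A generic qubit state |psi> = a0|0> + a1|1>; the amplitudes are
# illustrative values chosen so that |a0|^2 + |a1|^2 = 1
a0, a1 = 0.6, 0.8
psi = np.array([a0, a1])

# X exchanges the two amplitudes: X|psi> = a1|0> + a0|1>
out = X @ psi
print(out)  # [0.8 0.6]
```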
The next gate is called a ''phase gate'' or a “z” gate. It is also sometimes called a ''phase-flip'',&lt;br /&gt;
and is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z = \left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.9}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate is to introduce a sign change on the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle&amp;lt;/math&amp;gt;, as can be seen&lt;br /&gt;
from&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Z\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle - \alpha_1\left\vert{1}\right\rangle.&amp;lt;/math&amp;gt;|2.10}}&lt;br /&gt;
&lt;br /&gt;
The term phase gate is also used for the more general transformation&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;P = \left(\begin{array}{cc} e^{i\theta} &amp;amp; 0 \\ &lt;br /&gt;
                                0       &amp;amp; e^{-i\theta} \end{array}\right).&amp;lt;/math&amp;gt;|2.11}}&lt;br /&gt;
&lt;br /&gt;
For this reason, the z-gate will either be called a “z-gate” or a phase-flip gate.&lt;br /&gt;
&lt;br /&gt;
Another gate closely related to these is the &amp;lt;nowiki&amp;gt;“y”&amp;lt;/nowiki&amp;gt; gate. This gate is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y =  \left(\begin{array}{cc} 0 &amp;amp; -i \\ &lt;br /&gt;
                      i &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.12}}&lt;br /&gt;
&lt;br /&gt;
The action of this gate on a state is&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;Y\left\vert{\psi}\right\rangle = -i\alpha_1\left\vert{0}\right\rangle +i \alpha_0\left\vert{1}\right\rangle &lt;br /&gt;
            = -i(\alpha_1\left\vert{0}\right\rangle - \alpha_0\left\vert{1}\right\rangle)&amp;lt;/math&amp;gt;|2.13}}&lt;br /&gt;
&lt;br /&gt;
From this last expression, it is clear that, up to an overall factor of &amp;lt;math&amp;gt;-i\,\!&amp;lt;/math&amp;gt;, this gate is the same&lt;br /&gt;
as acting on a state with both &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt; gates. However, the order matters, and it&lt;br /&gt;
should be noted that&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;XZ = -i Y,\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
whereas&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&amp;lt;math&amp;gt;ZX = i Y.\,\!&amp;lt;/math&amp;gt;&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The fact that the order matters should not be a surprise, since matrices in general&lt;br /&gt;
do not commute. However, this situation arises so often in quantum mechanics that the&lt;br /&gt;
difference between the two orderings is given its own name and notation. It is called the ''commutator'' and is denoted by &amp;lt;math&amp;gt;[\cdot,\cdot]&amp;lt;/math&amp;gt;. That is, for any two matrices &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B&amp;lt;/math&amp;gt;, the commutator is defined to be&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[A,B] = AB -BA.\,\!&amp;lt;/math&amp;gt;|2.14}}&lt;br /&gt;
For the two gates &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Z&amp;lt;/math&amp;gt;,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[X,Z] = -2iY.\,\!&amp;lt;/math&amp;gt;|2.15}}&lt;br /&gt;
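The product and commutator relations above can be verified directly with NumPy; the following sketch (not part of the original text) checks each identity:

```python
import numpy as np

# The three gates as complex matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

assert np.allclose(X @ Z, -1j * Y)          # XZ = -iY
assert np.allclose(Z @ X,  1j * Y)          # ZX =  iY
assert np.allclose(X @ Z - Z @ X, -2j * Y)  # [X, Z] = -2iY, Eq. (2.15)
print("all identities hold")
```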
A very important gate which is used in many quantum information processing protocols,&lt;br /&gt;
including quantum algorithms, is called the Hadamard gate,&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H = \frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 &amp;amp; 1 \\ &lt;br /&gt;
                      1 &amp;amp; -1 \end{array}\right).&amp;lt;/math&amp;gt;|2.16}}&lt;br /&gt;
In this case, it is helpful to look at what this gate does to the two basis states:&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H \left\vert{0}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle + \left\vert{1}\right\rangle), &amp;lt;/math&amp;gt;&amp;lt;br /&amp;gt;&amp;lt;math&amp;gt;H \left\vert{1}\right\rangle = \frac{1}{\sqrt{2}}(\left\vert{0}\right\rangle - \left\vert{1}\right\rangle).&amp;lt;/math&amp;gt;|2.17}}&lt;br /&gt;
&lt;br /&gt;
So the Hadamard gate takes either one of the basis states to an equal superposition&lt;br /&gt;
of the two basis states; this is the reason it is so often used in quantum information&lt;br /&gt;
processing tasks. On a generic state,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert{\psi}\right\rangle = \frac{1}{\sqrt{2}}[(\alpha_0+\alpha_1)\left\vert{0}\right\rangle + (\alpha_0-\alpha_1)\left\vert{1}\right\rangle].&amp;lt;/math&amp;gt;|2.18}}&lt;br /&gt;
&lt;br /&gt;
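The Hadamard action on the basis states, Eq. (2.17), can be checked with a short NumPy sketch (not part of the original text):

```python
import numpy as np

# Hadamard gate, Eq. (2.16)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# H maps each basis state to an equal superposition, Eq. (2.17)
print(H @ ket0)  # ~ (|0> + |1>)/sqrt(2)
print(H @ ket1)  # ~ (|0> - |1>)/sqrt(2)

# Applying H twice returns the original state, since H is its own inverse
print(H @ H @ ket0)  # ~ |0>
```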
===The Pauli Matrices===&lt;br /&gt;
The three matrices &amp;lt;math&amp;gt;X,\,\!&amp;lt;/math&amp;gt; [[#eq2.7|Eq.(2.7)]] &amp;lt;math&amp;gt;Y,\,\!&amp;lt;/math&amp;gt; [[#eq2.12|Eq.(2.12)]]  and &amp;lt;math&amp;gt; Z \,\!&amp;lt;/math&amp;gt; [[#eq2.9|Eq.(2.9)]] are called the Pauli matrices. They are also sometimes denoted &amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;, or &amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt; respectively. They are ubiquitous in quantum computing and quantum information processing. This is because they, along with the &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
identity matrix, form a basis for the set of &amp;lt;math&amp;gt;2 \times 2\,\!&amp;lt;/math&amp;gt; Hermitian matrices and can be used to&lt;br /&gt;
describe all &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; unitary transformations as well. We will return to the latter point in the next chapter.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt; &amp;lt;div id=&amp;quot;Table2.1&amp;quot;&amp;gt;&amp;lt;big&amp;gt;'''TABLE 2.1'''&amp;lt;/big&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;10&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
|+ align=&amp;quot;bottom&amp;quot; |Table 2.1: ''The Pauli Matrices.  The table shows the Pauli matrices, three different, but common notations, and the action on a state.  The &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; is a binary digit, 0 or 1.''&lt;br /&gt;
|-&lt;br /&gt;
|Pauli Matrix&lt;br /&gt;
|Notation 1&lt;br /&gt;
|Notation 2&lt;br /&gt;
|Notation 3&lt;br /&gt;
|Action&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; 1 \\ 1 &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_x\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_1\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;X|x\rangle = |x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 0 &amp;amp; -i \\ i &amp;amp; 0 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y =iXZ\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_y\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_2\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Y|x\rangle = i(-1)^x|x\oplus 1\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\left(\begin{array}{cc} 1 &amp;amp; 0 \\ 0 &amp;amp; -1 \end{array}\right)\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_z\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;\sigma_3\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;math&amp;gt;Z|x\rangle = (-1)^x|x\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To show that they form a basis for &amp;lt;math&amp;gt;2 \times 2&amp;lt;/math&amp;gt; Hermitian matrices, note that any such matrix can be written in the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;A = \left(\begin{array}{cc} &lt;br /&gt;
                a_0+a_3  &amp;amp; a_1+ia_2 \\ &lt;br /&gt;
                a_1-ia_2 &amp;amp; a_0-a_3 \end{array}\right).&amp;lt;/math&amp;gt;|2.19}}&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;a_0\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_3\,\!&amp;lt;/math&amp;gt; are arbitrary, &amp;lt;math&amp;gt;a_0 + a_3\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_0 - a_3\,\!&amp;lt;/math&amp;gt; are arbitrary too. This matrix can be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}A &amp;amp;= a_0 \mathbb{I} + a_1X + a_2Y + a_3 Z \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + a_1\sigma_1 + a_2\sigma_2 + a_3 \sigma_3 \\&lt;br /&gt;
  &amp;amp;=  a_0 \mathbb{I} + \vec{a}\cdot\vec{\sigma}, \\&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;|2.20}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;\vec{a}\cdot\vec{\sigma} = \sum_{i=1}^3a_i\sigma_i\,\!&amp;lt;/math&amp;gt; is the &amp;quot;dot&lt;br /&gt;
product&amp;quot; between &amp;lt;math&amp;gt;\vec{a}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\vec{\sigma} = (\sigma_1,\sigma_2,\sigma_3)\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
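The decomposition in Eq. (2.20) can be inverted: since the Paulis are traceless and satisfy the orthogonality relation Tr(σ<sub>i</sub>σ<sub>j</sub>) = 2δ<sub>ij</sub>, each coefficient is recovered as a<sub>i</sub> = Tr(σ<sub>i</sub>A)/2. A minimal sketch (illustrative coefficient values, not from the text):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# An arbitrary 2x2 Hermitian matrix built from real a0..a3, as in Eq. (2.20)
a = [0.5, 1.2, -0.7, 0.3]
A = a[0]*I + a[1]*X + a[2]*Y + a[3]*Z

# Recover each coefficient with the trace: a_i = Tr(sigma_i A)/2.
# This works because Tr(sigma_i sigma_j) = 2 delta_ij and each Pauli is traceless.
for name, s in [("a0", I), ("a1", X), ("a2", Y), ("a3", Z)]:
    print(name, np.trace(s @ A).real / 2)
```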
An important and useful relationship between these is the following (which shows why&lt;br /&gt;
the latter notation above is so useful)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sigma_i\sigma_j = \mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k,&amp;lt;/math&amp;gt;|2.21}}&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt;i, j, k\,\!&amp;lt;/math&amp;gt; are numbers from the set &amp;lt;math&amp;gt;\{1, 2, 3\}\,\!&amp;lt;/math&amp;gt; and the definitions for &amp;lt;math&amp;gt;\delta_{ij}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk}\,\!&amp;lt;/math&amp;gt; are given&lt;br /&gt;
in Eqs. [[Appendix C - Vectors and Linear Algebra#eqC.17|(C.17)]] and [[Appendix C - Vectors and Linear Algebra#eqC.8|(C.8)]] respectively. The three matrices &amp;lt;math&amp;gt;\sigma_1, \sigma_2, \sigma_3\,\!&amp;lt;/math&amp;gt; are traceless Hermitian&lt;br /&gt;
matrices and they can be seen to be orthogonal using the so-called ''Hilbert-Schmidt inner product'', which is defined, for matrices &amp;lt;math&amp;gt; A\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B\,\!&amp;lt;/math&amp;gt;, as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(A,B) = \mbox{Tr}(A^\dagger B).&amp;lt;/math&amp;gt;|2.22}}&lt;br /&gt;
&lt;br /&gt;
The orthogonality for the set is then summarized as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\sigma_i,\sigma_j) = \mbox{Tr}(\sigma_i\sigma_j) = 2\delta_{ij}.\,\!&amp;lt;/math&amp;gt;|2.23}}&lt;br /&gt;
&lt;br /&gt;
This property is contained in Eq. [[#eq2.21|(2.21)]]. This one equation also contains all of the commutators.&lt;br /&gt;
Subtracting the equation with the product reversed,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = (\mathbb{I}\delta_{ij} +i \epsilon_{ijk}\sigma_k) &lt;br /&gt;
                      -(\mathbb{I}\delta_{ji} +i \epsilon_{jik}\sigma_k),&amp;lt;/math&amp;gt;|2.24}}&lt;br /&gt;
&lt;br /&gt;
but &amp;lt;math&amp;gt;\delta_{ij}=\delta_{ji}\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\epsilon_{ijk} = -\epsilon_{jik}\,\!&amp;lt;/math&amp;gt;, so this simplifies to&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;[\sigma_i,\sigma_j] = 2i \epsilon_{ijk}\sigma_k.\,\!&amp;lt;/math&amp;gt;|2.25}}&lt;br /&gt;
&lt;br /&gt;
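Both the product relation Eq. (2.21) and the commutator Eq. (2.25) can be verified over all index pairs. A sketch (not part of the original text), using 0-based indices for σ<sub>1</sub>, σ<sub>2</sub>, σ<sub>3</sub>:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),    # sigma_1 = X
         np.array([[0, -1j], [1j, 0]]),                # sigma_2 = Y
         np.array([[1, 0], [0, -1]], dtype=complex)]   # sigma_3 = Z
I = np.eye(2)

def eps(i, j, k):
    # Levi-Civita symbol for indices in {0, 1, 2}
    return (i - j) * (j - k) * (k - i) / 2

for i in range(3):
    for j in range(3):
        # Eq. (2.21): sigma_i sigma_j = I delta_ij + i eps_ijk sigma_k
        rhs = I * (i == j) + 1j * sum(eps(i, j, k) * sigma[k] for k in range(3))
        assert np.allclose(sigma[i] @ sigma[j], rhs)
        # Eq. (2.25): [sigma_i, sigma_j] = 2i eps_ijk sigma_k
        comm = sigma[i] @ sigma[j] - sigma[j] @ sigma[i]
        assert np.allclose(comm, 2j * sum(eps(i, j, k) * sigma[k] for k in range(3)))
print("Eqs. (2.21) and (2.25) hold for all index pairs")
```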
===States of Many Qubits===&lt;br /&gt;
Let us now consider the states of several (or many) qubits. For one qubit, there are two&lt;br /&gt;
possible basis states, say &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. If there are two qubits, each with these basis states,&lt;br /&gt;
basis states for the two together are found by using the tensor product. (See Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]].)&lt;br /&gt;
The set of basis states obtained in this way is&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{0}\right\rangle\otimes\left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{1}\right\rangle\otimes\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle\otimes\left\vert{1}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This set is more often written in short-hand notation as (again see Appendix C, [[Appendix C - Vectors and Linear Algebra#Tensor Products|Section C.7]] for details and examples)&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{00}\right\rangle, \; \left\vert{01}\right\rangle, \;&lt;br /&gt;
  \left\vert{10}\right\rangle, \; \left\vert{11}\right\rangle \right\},\,\!&amp;lt;/math&amp;gt;|2.26}}&lt;br /&gt;
&lt;br /&gt;
which can also be expressed as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left(\begin{array}{c} 1 \\ 0 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right), \; &lt;br /&gt;
       \left(\begin{array}{c} 0 \\ 0 \\ 0 \\ 1 \end{array}\right)&lt;br /&gt;
\right\}.\,\!&amp;lt;/math&amp;gt;|2.27}}&lt;br /&gt;
&lt;br /&gt;
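The column vectors of Eq. (2.27) follow from the tensor product, which NumPy implements as the Kronecker product. A short sketch (not part of the original text):

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])

# Build the two-qubit basis of Eq. (2.26)/(2.27) via the tensor product
for a in (ket0, ket1):
    for b in (ket0, ket1):
        print(np.kron(a, b))
# |00> -> [1 0 0 0], |01> -> [0 1 0 0], |10> -> [0 0 1 0], |11> -> [0 0 0 1]
```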
The extension to three qubits is straightforward,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{000}\right\rangle, \; \left\vert{001}\right\rangle, \;&lt;br /&gt;
  \left\vert{010}\right\rangle, \; \left\vert{011}\right\rangle, \; \left\vert{100}\right\rangle, \; \left\vert{101}\right\rangle, \;&lt;br /&gt;
  \left\vert{110}\right\rangle, \; \left\vert{111}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.28}}&lt;br /&gt;
&lt;br /&gt;
Those familiar with binary will recognize these as the numbers zero through seven, so this&lt;br /&gt;
set forms an ''ordered basis''. The states can therefore also be written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\{\left\vert{0}\right\rangle, \; \left\vert{1}\right\rangle, \;&lt;br /&gt;
  \left\vert{2}\right\rangle, \; \left\vert{3}\right\rangle, \; \left\vert{4}\right\rangle, \; \left\vert{5}\right\rangle, \;&lt;br /&gt;
  \left\vert{6}\right\rangle, \; \left\vert{7}\right\rangle \right\}.\,\!&amp;lt;/math&amp;gt;|2.29}}&lt;br /&gt;
&lt;br /&gt;
The ordering of the products is important because each spot&lt;br /&gt;
corresponds to a physical particle or physical system.  When some&lt;br /&gt;
confusion may arise, we may also label the ket with a subscript to&lt;br /&gt;
denote the particle or position.  For example, two different people,&lt;br /&gt;
Alice and Bob, can be used to represent distant parties that may&lt;br /&gt;
share some information or wish to communicate.  In this case, the&lt;br /&gt;
state belonging to Alice can be denoted &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_A\,\!&amp;lt;/math&amp;gt;.  Or if she is&lt;br /&gt;
referred to as party 1 or particle 1, &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle_1\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The most general 2-qubit state is written as&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_{00}\left\vert{00}\right\rangle + \alpha_{01}\left\vert{01}\right\rangle &lt;br /&gt;
             + \alpha_{10}\left\vert{10}\right\rangle + \alpha_{11}\left\vert{11}\right\rangle &lt;br /&gt;
           =\left(\begin{array}{c} \alpha_{00} \\ \alpha_{01} \\ &lt;br /&gt;
                                   \alpha_{10} \\ \alpha_{11} \end{array}\right).&amp;lt;/math&amp;gt;|2.30}}&lt;br /&gt;
&lt;br /&gt;
The normalization condition is &lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_{00}|^2  + |\alpha_{01}|^2&lt;br /&gt;
             + |\alpha_{10}|^2 + |\alpha_{11}|^2=1.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
The generalization to an arbitrary number of qubits, say &amp;lt;math&amp;gt;n\,\!&amp;lt;/math&amp;gt;, is also&lt;br /&gt;
straightforward and can be written as &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \sum_{i=0}^{2^n-1} \alpha_i\left\vert{i}\right\rangle.\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Quantum Gates for Many Qubits===&lt;br /&gt;
&lt;br /&gt;
Just as for a single qubit, the most general closed-system transformation of a&lt;br /&gt;
state of many qubits is a unitary transformation. Being able to make an arbitrary unitary&lt;br /&gt;
transformation on many qubits is an important task. If an arbitrary unitary transformation&lt;br /&gt;
on a set of qubits can be made, then any quantum gate can be implemented. If this ability to&lt;br /&gt;
implement any arbitrary quantum gate can be accomplished using a particular set of quantum&lt;br /&gt;
gates, that set is said to be a ''universal set of gates'' or that the condition of ''universality'' has&lt;br /&gt;
been met by this set. It turns out that there is a theorem which provides one way for&lt;br /&gt;
identifying a universal set of gates.&lt;br /&gt;
&lt;br /&gt;
'''Theorem:'''&lt;br /&gt;
&lt;br /&gt;
''The ability to implement an entangling gate between any two qubits, plus the ability to implement all single-qubit unitary transformations, will enable universal quantum computing.''&lt;br /&gt;
&lt;br /&gt;
It turns out that one doesn’t need to be able to perform an entangling gate between&lt;br /&gt;
distant qubits; nearest-neighbor interactions are sufficient. We can transfer the state of a&lt;br /&gt;
qubit to a qubit that is next to the one we would like it to interact with, then perform&lt;br /&gt;
the entangling gate between the two and then transfer back.&lt;br /&gt;
&lt;br /&gt;
This is an important and often used theorem which will be the main focus of the next&lt;br /&gt;
few sections. A particular class of two-qubit gates which can be used to entangle qubits will&lt;br /&gt;
be discussed along with circuit diagrams for many qubits.&lt;br /&gt;
&lt;br /&gt;
====Controlled Operations====&lt;br /&gt;
&lt;br /&gt;
A controlled operation is one that is conditioned on the state of another part of the system, usually a qubit. The most cited example is the CNOT (controlled-NOT) gate, which flips a target qubit if a control qubit is in the state &lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;; thus it is a controlled NOT operation for qubits. This gate is used often enough to warrant detailed discussion here.&lt;br /&gt;
&lt;br /&gt;
Consider the following matrix operation on two qubits:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;C_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; 1 &amp;amp; 0 \end{array}\right).&amp;lt;/math&amp;gt;|2.31}}&lt;br /&gt;
&lt;br /&gt;
Under this transformation, the following changes occur:&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{array}{c|c}&lt;br /&gt;
         \; \left\vert{\psi}\right\rangle\; &amp;amp; C_{12}\left\vert{\psi}\right\rangle \\ \hline&lt;br /&gt;
                \left\vert{00}\right\rangle &amp;amp; \left\vert{00}\right\rangle \\&lt;br /&gt;
                \left\vert{01}\right\rangle &amp;amp; \left\vert{01}\right\rangle \\&lt;br /&gt;
                \left\vert{10}\right\rangle &amp;amp; \left\vert{11}\right\rangle \\&lt;br /&gt;
                \left\vert{11}\right\rangle &amp;amp; \left\vert{10}\right\rangle &lt;br /&gt;
\end{array}&amp;lt;/math&amp;gt;|2.32}}&lt;br /&gt;
&lt;br /&gt;
This transformation is called the CNOT, or controlled-NOT, since the second qubit is flipped&lt;br /&gt;
if the first is in the state &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt; and is otherwise left alone. The circuit diagram for this transformation corresponds to the following representation of the gate. Let &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y\,\!&amp;lt;/math&amp;gt; be zero or one.&lt;br /&gt;
The CNOT is then given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{x}\right\rangle_{i}\left\vert{y}\right\rangle_{j} \overset{CNOT}{\rightarrow} \left\vert{x}\right\rangle_{i}\left\vert{x\oplus y}\right\rangle_{j}.&amp;lt;/math&amp;gt;|2.33}}&lt;br /&gt;
&lt;br /&gt;
In binary, of course &amp;lt;math&amp;gt;0\oplus 0 =0&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;0\oplus 1 = 1 = 1\oplus 0&amp;lt;/math&amp;gt;, and&lt;br /&gt;
&amp;lt;math&amp;gt;1\oplus 1 =0&amp;lt;/math&amp;gt;.  The circuit diagram is given in Fig. 2.3 below. &lt;br /&gt;
The first qubit at the top of the diagram, &amp;lt;math&amp;gt;\left\vert{x}\right\rangle&amp;lt;/math&amp;gt;, is called the&lt;br /&gt;
''control bit'' while the one below, &amp;lt;math&amp;gt;\left\vert{y}\right\rangle&amp;lt;/math&amp;gt;, is called the ''target bit''.&lt;br /&gt;
&lt;br /&gt;
[[File:CNOT.jpg|center|400px]]&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
Figure 2.3: Circuit diagram for a CNOT gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
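The truth table of Eq. (2.32) can be reproduced by applying the matrix of Eq. (2.31) to the tensor-product basis states. A sketch (not part of the original text):

```python
import numpy as np

# CNOT matrix, Eq. (2.31)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket = {0: np.array([1, 0]), 1: np.array([0, 1])}

# Verify Eq. (2.33): |x>|y> -> |x>|x XOR y>
for x in (0, 1):
    for y in (0, 1):
        out = CNOT @ np.kron(ket[x], ket[y])
        expected = np.kron(ket[x], ket[x ^ y])
        assert np.allclose(out, expected)
print("CNOT truth table verified")
```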
One can immediately generalize the operation of the CNOT to a controlled-U gate. This&lt;br /&gt;
is a gate, shown in Fig. 2.4, which implements a unitary transformation &amp;lt;math&amp;gt;U\,\!&amp;lt;/math&amp;gt; on the second&lt;br /&gt;
qubit, if the state of the first is &amp;lt;math&amp;gt;\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;. The matrix transformation is given by&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;CU_{12} = \left(\begin{array}{cccc}&lt;br /&gt;
                 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
                 0 &amp;amp; 0 &amp;amp; u_{21} &amp;amp; u_{22} \end{array}\right),&amp;lt;/math&amp;gt;|2.34}}&lt;br /&gt;
&lt;br /&gt;
where the matrix&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
&amp;lt;math&amp;gt;U = \left(\begin{array}{cc}&lt;br /&gt;
          u_{11} &amp;amp; u_{12} \\&lt;br /&gt;
          u_{21} &amp;amp; u_{22} \end{array}\right).&amp;lt;/math&amp;gt;&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
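The block structure of Eq. (2.34) can be built directly: the upper-left block is the identity and the lower-right block is &amp;lt;math&amp;gt;U&amp;lt;/math&amp;gt;. A sketch (the helper name <code>controlled</code> is hypothetical, not from the text):

```python
import numpy as np

def controlled(U):
    """Embed a one-qubit unitary U as a controlled-U on two qubits, Eq. (2.34).

    Hypothetical helper for illustration: U acts on the target qubit only
    when the control qubit is |1>.
    """
    CU = np.eye(4, dtype=complex)
    CU[2:, 2:] = U
    return CU

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

CX = controlled(X)   # reproduces the CNOT of Eq. (2.31)
CZ = controlled(Z)   # the controlled-phase (CPHASE) gate
print(CX.real)
```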
For example the controlled-phase gate is given in [[#Figure 2.5|Fig. 2.5]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CU.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.4: Circuit diagram for a CU gate.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Many-qubit Circuits====&lt;br /&gt;
&lt;br /&gt;
Many-qubit circuits are a straightforward generalization of the single-qubit circuit diagrams.&lt;br /&gt;
For example, Fig. 2.6 shows the implementation of CNOT&amp;lt;math&amp;gt;_{14}&amp;lt;/math&amp;gt; and CNOT&amp;lt;math&amp;gt;_{23}&amp;lt;/math&amp;gt; in the&lt;br /&gt;
same diagram. The crossing of lines is not confusing since there is a target and control&lt;br /&gt;
which are clearly distinguished in each case.&lt;br /&gt;
&lt;br /&gt;
It is quite interesting, however, that as the diagrams become more complicated, the possibility&lt;br /&gt;
arises that one may change between equivalent forms of a circuit that, in the end,&lt;br /&gt;
&amp;lt;div id =&amp;quot;Figure 2.5&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:CP.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.5: Circuit diagram for a Controlled-phase (CPHASE) gate.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Multiqcs.jpg]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.6: Multiple CNOT gates on a set of qubits.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
implements the same multiple-qubit unitary. For example, noting that &amp;lt;math&amp;gt;HCPHASEH = CNOT\,\!&amp;lt;/math&amp;gt;, the two&lt;br /&gt;
circuits in Fig. 2.7 implement the same two-qubit unitary transformation. This enables the&lt;br /&gt;
simplification of some quite complicated circuits.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:Hzhequiv.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.7: Two circuits which are equivalent since they implement the same two-qubit&lt;br /&gt;
unitary transformation.&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
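The equivalence in Fig. 2.7 — Hadamards on the target before and after a CPHASE yield a CNOT — can be checked numerically. A sketch (not part of the original text):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)

# Controlled-phase (CPHASE) and CNOT matrices
CZ = np.diag([1.0, 1.0, 1.0, -1.0])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Sandwiching CPHASE between Hadamards on the target gives CNOT,
# i.e. (I x H) CZ (I x H) = CNOT, since H Z H = X
assert np.allclose(np.kron(I, H) @ CZ @ np.kron(I, H), CNOT)
print("circuit equivalence verified")
```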
===Measurement===&lt;br /&gt;
&lt;br /&gt;
Measurement in quantum mechanics is quite different from that of&lt;br /&gt;
classical mechanics.  In classical mechanics (and computing), one assumes that a measurement&lt;br /&gt;
can be made at will without disturbing or changing the state of the&lt;br /&gt;
physical system.  In quantum mechanics, this assumption cannot be&lt;br /&gt;
made.  This is important for a variety of reasons that will become&lt;br /&gt;
clear later.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====Standard Prescription====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the introduction a simple example was provided to distinguish quantum states from classical states.  This example of &lt;br /&gt;
two wells with one particle can (with caution) be used here as well.  &lt;br /&gt;
&lt;br /&gt;
Consider the quantum state in a superposition of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
of the form&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert\psi\right\rangle = \alpha_0\left\vert 0\right\rangle +&lt;br /&gt;
    \alpha_1\left\vert 1\right\rangle,&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.35}}&lt;br /&gt;
&lt;br /&gt;
with &amp;lt;math&amp;gt;|\alpha_0|^2 + |\alpha_1|^2 = 1\,\!&amp;lt;/math&amp;gt;.  If the state is measured in&lt;br /&gt;
the computational basis, the result will be &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probability &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt;.  As always, it is important to note that before the measurement the state is not in either of the computational basis states, but in a superposition of the two.&lt;br /&gt;
&lt;br /&gt;
This can be easily shown, for the particular state with &amp;lt;math&amp;gt;\alpha_0 = \alpha_1 = 1/\sqrt{2}\,\!&amp;lt;/math&amp;gt;, by acting on the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; with a Hadamard transformation,&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;H\left\vert \psi\right\rangle = \left\vert 0\right\rangle.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.36}}&lt;br /&gt;
&lt;br /&gt;
This state, produced from a unitary transformation of &amp;lt;math&amp;gt;\left\vert\psi\right\rangle\,\!&amp;lt;/math&amp;gt;, has probability &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; and probability &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt; of being in the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt;.  If it were in one or the other, then acting on the state with a Hadamard transformation would give some probability of it being in &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and some probability of being in &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt;. (This argument is so&lt;br /&gt;
simple and pointed that it was taken almost word-for-word from  [[Bibliography#Mermin:qcbook|Mermin's book]], page 27.)  &lt;br /&gt;
&lt;br /&gt;
A measurement in the computational basis is said to project this state into either the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; or the state &amp;lt;math&amp;gt;\left\vert 1\right\rangle\,\!&amp;lt;/math&amp;gt; with probabilities &amp;lt;math&amp;gt;|\alpha_0|^2\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|\alpha_1|^2\,\!&amp;lt;/math&amp;gt; respectively.  To understand this as a projection, consider the following way in which the &amp;lt;math&amp;gt;0\,\!&amp;lt;/math&amp;gt;-component of the state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is found.  The state &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt; is projected onto the state &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; mathematically by taking the [[Index#I|inner product]] (see [[Appendix C - Vectors and Linear Algebra#More Dirac Notation|Section C.4]]) of &amp;lt;math&amp;gt;\left\vert 0\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\vert \psi\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle 0\mid  \psi\right\rangle = \alpha_0.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.37}}&lt;br /&gt;
&lt;br /&gt;
Notice that this is a complex number and that its complex conjugate&lt;br /&gt;
can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi \mid 0\right\rangle = \alpha_0^*.&lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.38}}&lt;br /&gt;
&lt;br /&gt;
Therefore the probability can be expressed as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\langle\psi\mid 0 \right\rangle \left\langle 0\mid\psi\right\rangle = \left\vert\left\langle &lt;br /&gt;
  0\mid \psi\right\rangle \right\vert^2.\,\!&amp;lt;/math&amp;gt;|2.39}}&lt;br /&gt;
&lt;br /&gt;
Now consider a multiple-qubit system with state &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert \Psi\right\rangle = \sum_i \alpha_i\left\vert i\right\rangle.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
The result of a measurement is a projection: the&lt;br /&gt;
state is projected onto the basis state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; with probability&lt;br /&gt;
&amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;; the same properties hold for this more general&lt;br /&gt;
system.  &lt;br /&gt;
&lt;br /&gt;
To summarize, if a measurement is made on the system &amp;lt;math&amp;gt;\left\vert\Psi\right\rangle\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt; is obtained with probability &amp;lt;math&amp;gt;|\alpha_i|^2\,\!&amp;lt;/math&amp;gt;.&lt;br /&gt;
Assuming that &amp;lt;math&amp;gt;\left\vert i\right\rangle \,\!&amp;lt;/math&amp;gt; results from the measurement, the state of the&lt;br /&gt;
system has been projected into the state &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  Therefore, the&lt;br /&gt;
state of the system immediately after the measurement is &amp;lt;math&amp;gt;\left\vert i\right\rangle\,\!&amp;lt;/math&amp;gt;.  &lt;br /&gt;
&lt;br /&gt;
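The measurement prescription above can be simulated by sampling outcomes with the Born probabilities. A sketch (illustrative amplitudes, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# A generic qubit state; the outcome probabilities are |alpha_i|^2
alpha = np.array([0.6, 0.8])
probs = np.abs(alpha) ** 2          # [0.36, 0.64]

# Sample many computational-basis measurements of identically prepared states
outcomes = rng.choice([0, 1], size=10000, p=probs)
print(outcomes.mean())              # close to 0.64

# After a single measurement yielding outcome i, the post-measurement
# state is the basis state |i>, not the original superposition
```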
A circuit diagram with a measurement represented by a box with an&lt;br /&gt;
arrow is given in Figure 2.8.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementcd.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.8: The circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
An alternative is to put an &amp;lt;nowiki&amp;gt;&amp;quot;M&amp;quot;&amp;lt;/nowiki&amp;gt; inside the box.  This is shown in Fig. 2.9.  &lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurementM.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.9: An alternative circuit diagram for a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an example, the measurement result can be used as input to a subsequent operation.  The unitary transform&lt;br /&gt;
in Figure 2.10 is one that depends upon the outcome of the&lt;br /&gt;
measurement.  Notice that because this input is classical&lt;br /&gt;
information, it is represented by a double line.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[File:measurement.jpg‎]]&amp;lt;br /&amp;gt;&lt;br /&gt;
Figure 2.10: A circuit which includes a measurement.  &lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Projection Operators====&lt;br /&gt;
&lt;br /&gt;
Projection operators are used quite often, and the description of&lt;br /&gt;
measurement in the previous section is a good example of how they are&lt;br /&gt;
used.  What, then, is a projector?  In ordinary&lt;br /&gt;
three-dimensional space, a vector is written as &lt;br /&gt;
&amp;lt;math&amp;gt;\vec v=v_x\hat{x}+v_y\hat{y}+v_z\hat{z}\,\!&amp;lt;/math&amp;gt; and the &amp;lt;math&amp;gt;x\,\!&amp;lt;/math&amp;gt; part of the&lt;br /&gt;
vector can be obtained by &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x}\cdot\vec v) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.40}}&lt;br /&gt;
&lt;br /&gt;
This is the part of the vector lying along the x axis.  Notice that if&lt;br /&gt;
the projection is performed again, the same result is obtained&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\hat{x}(\hat{x} \cdot v_x\hat{x}) = v_x\hat{x}.\,\!&amp;lt;/math&amp;gt;|2.41}}&lt;br /&gt;
&lt;br /&gt;
This is the defining characteristic of projection operators: when one is&lt;br /&gt;
performed twice, the second result is the same as the first.  &lt;br /&gt;
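This idempotence is easy to check numerically. The snippet below is an illustrative NumPy sketch of the two projection equations above, using an arbitrary example vector:

```python
import numpy as np

# Projecting a 3D vector onto the x direction once, and then again,
# gives the same result (the P^2 = P property).
x_hat = np.array([1.0, 0.0, 0.0])
v = np.array([3.0, -2.0, 5.0])               # v = v_x x + v_y y + v_z z

proj_once = x_hat * np.dot(x_hat, v)         # x(x . v) = v_x x
proj_twice = x_hat * np.dot(x_hat, proj_once)
```

Here `proj_once` is `[3., 0., 0.]`, and projecting a second time leaves it unchanged.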
&lt;br /&gt;
This can be extended to the complex vectors in quantum mechanics.  The&lt;br /&gt;
outer product &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\!\!\left\langle{x}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector.  For example,&lt;br /&gt;
&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert\,\!&amp;lt;/math&amp;gt; is a projector and can be written in matrix form as &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert = \left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.42}}&lt;br /&gt;
&lt;br /&gt;
Acting with this on &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle = \alpha_0\left\vert{0}\right\rangle + \alpha_1\left\vert{1}\right\rangle\,\!&amp;lt;/math&amp;gt;&lt;br /&gt;
gives&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
           \alpha_1 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.43}}&lt;br /&gt;
&lt;br /&gt;
Acting again produces&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  0  \end{array}\right) &lt;br /&gt;
    \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
              0 &lt;br /&gt;
         \end{array}\right) &lt;br /&gt;
=     \left(\begin{array}{c}&lt;br /&gt;
           \alpha_0 \\&lt;br /&gt;
             0 &lt;br /&gt;
         \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.44}}&lt;br /&gt;
&lt;br /&gt;
This is due to the fact that&lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;(\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert)^2 = \left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.45}}&lt;br /&gt;
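The matrix computations above can be reproduced directly. The following NumPy sketch builds &amp;lt;math&amp;gt;\left\vert{0}\right\rangle\!\!\left\langle{0}\right\vert\,\!&amp;lt;/math&amp;gt; as an outer product and applies it twice to &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt;; the amplitudes chosen are illustrative only:

```python
import numpy as np

# |0><0| as an outer product, acting on |psi> = alpha0|0> + alpha1|1>.
ket0 = np.array([1.0, 0.0])
P0 = np.outer(ket0, ket0.conj())             # |0><0| = [[1, 0], [0, 0]]

psi = np.array([0.6, 0.8j])                  # illustrative alpha0, alpha1
once = P0 @ psi                              # (alpha0, 0)^T
twice = P0 @ once                            # acting again: same vector
```

Both `once` and `twice` equal `(alpha0, 0)`, and `P0 @ P0` equals `P0`, which is the idempotence property in matrix form.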
&lt;br /&gt;
In fact, this property essentially defines a projection.  A projection is&lt;br /&gt;
a linear transformation &amp;lt;math&amp;gt;P\,\!&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;P^2 = P\,\!&amp;lt;/math&amp;gt;. Much of our intuition about geometric projections in&lt;br /&gt;
three dimensions carries over to the more abstract cases.  One important&lt;br /&gt;
example is that the sum of the projectors onto all the vectors of an&lt;br /&gt;
orthonormal basis is the identity. The generalization to arbitrary&lt;br /&gt;
dimensions, where &amp;lt;math&amp;gt;\left\vert{i}\right\rangle\,\!&amp;lt;/math&amp;gt; is any basis vector in that space, is&lt;br /&gt;
immediate.  In this case the identity, expressed as a sum over all&lt;br /&gt;
projectors, is &lt;br /&gt;
&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\sum_{i} \left\vert{i}\right\rangle\!\!\left\langle{i}\right\vert = 1.  &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.46}}&lt;br /&gt;
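This completeness relation can also be verified numerically. The sketch below (dimension 4 is an arbitrary choice) sums the projectors onto the standard basis vectors and recovers the identity matrix:

```python
import numpy as np

# The projectors |i><i| onto an orthonormal basis sum to the identity.
d = 4
basis = np.eye(d)                            # column i is the basis vector |i>
total = sum(np.outer(basis[:, i], basis[:, i].conj()) for i in range(d))
```

Here `total` is exactly the 4x4 identity matrix, as the completeness relation requires.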
&lt;br /&gt;
====Phase in/Phase out====&lt;br /&gt;
&lt;br /&gt;
The probability of finding the system in the state &amp;lt;math&amp;gt;\left\vert{x}\right\rangle\,\!&amp;lt;/math&amp;gt;,&lt;br /&gt;
where &amp;lt;math&amp;gt;x=0\,\!&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;1\,\!&amp;lt;/math&amp;gt;, is&lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi}\right\rangle}(\left\vert{x}\right\rangle) &amp;amp;= \left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.47}}&lt;br /&gt;
Note that &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\left\langle{\psi}\right\vert\,\!&amp;lt;/math&amp;gt; both appear in this&lt;br /&gt;
expression. So if &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; is &lt;br /&gt;
substituted into the expression for &amp;lt;math&amp;gt;\mbox{Prob}(\left\vert{x}\right\rangle)\,\!&amp;lt;/math&amp;gt;, the&lt;br /&gt;
result is unchanged, &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mbox{Prob}_{\left\vert{\psi^\prime}\right\rangle}(\left\vert{x}\right\rangle) &lt;br /&gt;
                     &amp;amp;= \left\langle{\psi^\prime}\mid{x}\right\rangle\left\langle{x}\mid{\psi^\prime}\right\rangle \\&lt;br /&gt;
                     &amp;amp;= e^{i\theta}\left\langle{\psi}\mid{x}\right\rangle\left\langle{x}\mid{\psi}\right\rangle e^{-i\theta} \\&lt;br /&gt;
                     &amp;amp;= |\left\langle{\psi}\mid{x}\right\rangle|^2.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;|2.48}}&lt;br /&gt;
Therefore when &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; changes by an overall phase, there is no effect on&lt;br /&gt;
this probability.  This is why it is often said that &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;\left(\begin{array}{cc}&lt;br /&gt;
         e^{i\theta} &amp;amp; 0 \\&lt;br /&gt;
               0  &amp;amp; e^{-i\theta}  \end{array}\right) &lt;br /&gt;
= e^{i\theta}\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right) &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.49}}&lt;br /&gt;
is equivalent to &lt;br /&gt;
{{Equation|&amp;lt;math&amp;gt;&lt;br /&gt;
\left(\begin{array}{cc}&lt;br /&gt;
           1  &amp;amp;  0 \\&lt;br /&gt;
           0  &amp;amp;  e^{-2i\theta}  \end{array}\right). &lt;br /&gt;
\,\!&amp;lt;/math&amp;gt;|2.50}}&lt;br /&gt;
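The cancellation of the global phase can be seen numerically as well. In this NumPy sketch (with illustrative amplitudes), the probabilities computed from &amp;lt;math&amp;gt;\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; and from &amp;lt;math&amp;gt;\left\vert{\psi^\prime}\right\rangle = e^{-i\theta}\left\vert{\psi}\right\rangle\,\!&amp;lt;/math&amp;gt; come out identical:

```python
import numpy as np

# An overall phase e^{-i theta} cancels out of every measurement
# probability; the amplitudes and theta below are illustrative.
psi = np.array([0.6, 0.8j])
theta = 1.234
psi_prime = np.exp(-1j * theta) * psi        # same state, new global phase

probs = np.abs(psi) ** 2                     # |<x|psi>|^2 for x = 0, 1
probs_prime = np.abs(psi_prime) ** 2
```

Both arrays equal `[0.36, 0.64]`: the phase goes in and drops right back out.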
&lt;br /&gt;
However, there are times when a phase can make a difference. In&lt;br /&gt;
those cases it is really a ''relative'' phase between two states that matters. This will become clear later on.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Chapter 3 - Physics of Quantum Information#Introduction|Continue to '''Chapter 3 - Physics of Quantum Information''']]&lt;br /&gt;
&lt;br /&gt;
==Footnotes==&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Anada</name></author>
		
	</entry>
</feed>