ISBN-13: 9783031442254 / English
Preface 7
1.1 Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
I Quantum machine learning and Tensorflow* 11
2 Introduction 13
2.1 Fusion between QM and NN . . . . . . . . . . . . . . . . . . . . . 13
2.2 The quantum advantage in boson sampling and NN . . . . . . . 13
2.3 The background of a quantum engineer . . . . . . . . . . . . . . 13
2.4 Impact on the foundation of quantum mechanics . . . . . . . . . 15
3 Quantum hardware 17
4 Review on quantum machine learning and related 19
4.1 Neural networks in physics beyond quantum mechanics . . . . . . 20
4.2 Further readings . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
5 Coding fundamentals 21
5.1 Matrix manipulation in Python . . . . . . . . . . . . . . . . . . . 21
5.2 What is Tensorflow . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.3 Tensor and variables in Tensorflow . . . . . . . . . . . . . . . . 21
5.4 Objects in Tensorflow . . . . . . . . . . . . . . . . . . . . . . . . 21
5.5 Models in Tensorflow . . . . . . . . . . . . . . . . . . . . . . . . . 21
5.5.1 Automatic Graph building . . . . . . . . . . . . . . . . . . 21
5.5.2 Automatic differentiation . . . . . . . . . . . . . . . . . . 21
6 Neural networks model 23
6.1 Examples by tensorflow . . . . . . . . . . . . . . . . . . . . . . . 23
7 Reservoir computing 25
7.1 Examples by tensorflow . . . . . . . . . . . . . . . . . . . . . . . 25
II Neural networks and phase space 27
8 Phase-space representation 29
8.1 The characteristic function with real variables . . . . . . . . . . . 30
8.2 Gaussian states . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
8.3 Vacuum state . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
8.4 Coherent state . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
9 Linear transformations 35
9.1 The U and M matrices* . . . . . . . . . . . . . . . . . . . . . . . 36
9.2 Generating a symplectic matrix for a random medium . . . . . . 39
10 Gaussian density matrix as a neural network layer 41
10.1 The vacuum layer . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
11 Pullback 45
11.1 Pullback of Gaussian states . . . . . . . . . . . . . . . . . . . . . 46
11.2 Coding the linear layer . . . . . . . . . . . . . . . . . . . . . . . . 46
11.3 Pullback cascading . . . . . . . . . . . . . . . . . . . . . . . . . . 48
11.4 The Glauber displacement layer . . . . . . . . . . . . . . . . . . . 51
11.5 A linear layer for a complex medium . . . . . . . . . . . . . . . . 52
12 Quantum reservoir computing examples 55
12.1 Observables as derivatives of χ . . . . . . . . . . . . . . . . . . . 55
12.2 A coherent state in a complex medium . . . . . . . . . . . . . . . 56
12.3 Training a complex medium for an arbitrary coherent state . . . 57
12.3.1 Training to fit a target characteristic function . . . . . . . 59
12.3.2 Training by first derivatives . . . . . . . . . . . . . . . . . 61
12.3.3 Training by second derivatives . . . . . . . . . . . . . . . 63
12.3.4 The CovarianceLayer . . . . . . . . . . . . . . . . . . . . 63
12.4 Proof of Eq. (12.3) . . . . . . . . . . . . . . . . . . . . . . . . . . 65
12.5 Two trainable media and a reservoir . . . . . . . . . . . . . . . . 66
12.6 Phase modulator . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
12.7 Training phase modulators . . . . . . . . . . . . . . . . . . . . . . 69
III Non classical states 71
13 Introduction 73
13.1 The generalized symplectic operator . . . . . . . . . . . . . . . . 73
14 Squeezing 75
14.1 Single Mode Squeezed state . . . . . . . . . . . . . . . . . . . . . 75
14.1.1 Symplectic representation for the squeezing . . . . . . . . 75
14.2 Multi-mode squeezed vacuum NN model . . . . . . . . . . . . . . 76
14.3 Covariance matrix and squeezing . . . . . . . . . . . . . . . . . . 78
14.4 Squeezed coherent states . . . . . . . . . . . . . . . . . . . . . . . 79
14.4.1 Displacing the squeezed vacuum . . . . . . . . . . . . . . 79
14.4.2 Squeezing the displaced vacuum . . . . . . . . . . . . . . 80
14.5 Two-mode squeezing layer . . . . . . . . . . . . . . . . . . . . . . 82
15 Beam splitters and detection 87
15.1 Beam splitter layer . . . . . . . . . . . . . . . . . . . . . . . . . . 87
15.2 Photon counter layer . . . . . . . . . . . . . . . . . . . . . . . . . 90
15.3 Homodyne detection . . . . . . . . . . . . . . . . . . . . . . . . . 94
15.4 Measuring the expected value of the quadrature operator . . . . 96
16 Uncertainties 99
16.1 The Heisenberg layer . . . . . . . . . . . . . . . . . . . . . . . . . 99
16.2 Heisenberg layer for general states . . . . . . . . . . . . . . . . . 100
16.2.1 The LaplacianLayer . . . . . . . . . . . . . . . . . . . . 100
16.2.2 The BiharmonicLayer . . . . . . . . . . . . . . . . . . . . 102
16.2.3 Using the BiharmonicLayer in the HeisenbergLayer . . 104
16.3 Heisenberg layer for Gaussian states . . . . . . . . . . . . . . . . 106
16.4 Testing the HeisenbergLayer with a squeezed state . . . . . . . 108
16.4.1 Proof of equations (16.4) and (16.5)∗ and (16.9)∗ . . . . . 109
17 The DifferentialGaussianLayer 113
17.1 Uncertainties in Homodyne detection . . . . . . . . . . . . . . . . 113
17.2 Testing the DifferentialGaussianLayer on coherent state . . . 117
17.3 Using DifferentialGaussianLayer in homodyne detection . . . 119
17.3.1 Proof of Eqs. (17.3) and (17.5) . . . . . . . . . . . . . . . 120
18 Entanglement 121
18.1 Using beam splitters as entangler . . . . . . . . . . . . . . . . . . 121
18.2 Two squeezed states in a beam splitter . . . . . . . . . . . . . . . 121
18.3 Computing the entanglement . . . . . . . . . . . . . . . . . . . . 122
18.4 Training the model to maximize the entanglement . . . . . . . . 125
IV Gaussian Boson Sampling 127
19 Boson sampling introduction 129
20 Boson sampling 131
20.1 Boson sampling in a single mode . . . . . . . . . . . . . . . . . . 131
20.2 Boson sampling with many modes . . . . . . . . . . . . . . . . . 132
21 Simple cases 135
21.1 Using the Hafnian to compute . . . . . . . . . . . . . . . . . . . . 135
22 Machine learning implementation with functional approach 137
22.1 The Q-transform function . . . . . . . . . . . . . . . . . . . . . . 137
22.2 The multiderivative operator . . . . . . . . . . . . . . . . . . . . 140
22.3 Single mode coherent state . . . . . . . . . . . . . . . . . . . . . 142
22.4 Single mode squeezed vacuum state . . . . . . . . . . . . . . . . 144
22.5 Multimode coherent case . . . . . . . . . . . . . . . . . . . . . . . 144
22.6 A coherent mode and a squeezed mode . . . . . . . . . . . . . . . 148
22.7 A squeezed mode and a coherent mode in a random interferometer . 151
22.8 Using the functional approach to evaluate the derivatives . . . . 151
23 Testing the Boson sample protocol with Haar unitary 155
23.1 The Haar random layer . . . . . . . . . . . . . . . . . . . . . . . 155
23.2 A model with a varying number of layers . . . . . . . . . . . . . 157
23.3 Generating the sampling patterns . . . . . . . . . . . . . . . . . . 157
23.4 Computing the pattern probability . . . . . . . . . . . . . . . . . 158
24 Training a complex medium to enhance multiparticle events 163
24.1 Training by squeezing parameters . . . . . . . . . . . . . . . . . . 173
24.2 Training by linear interferometer . . . . . . . . . . . . . . . . . . 173
24.3 Training by displacing operators . . . . . . . . . . . . . . . . . . 173
V Programming a real quantum computer* 175
25 Introduction 177
26 Xanadu X8 hardware 179
27 Xanadu X8 model 181
28 Xanadu X8 training 183
VI Using NN to minimize many-body Hamiltonians* 185
VII Conclusions and future work 187
VIII Appendices* 189
Claudio Conti is an associate professor at the Department of Physics of the Sapienza University of Rome. He has authored over 250 articles in fields including quantum physics, photonics, nonlinear science, biophysics, and complexity. His work spans experiment and theory, including the first observation of replica symmetry breaking, cited in the scientific background of the 2021 Nobel Prize in Physics, the investigation of neuromorphic computing with quantum fluids, and the optical acceleration of natural language processing. Claudio Conti coordinates an experimental and theoretical group in Rome exploring the frontiers of artificial intelligence and physics.
This book presents a new way of thinking about quantum mechanics and machine learning by merging the two. Quantum mechanics and machine learning may seem theoretically disparate, but their link becomes clear through the density matrix operator, which can be readily approximated by neural network models, permitting a formulation of quantum physics in which physical observables are computed via neural networks. As well as demonstrating the natural affinity of quantum physics and machine learning, this viewpoint opens rich possibilities in terms of computation, efficient hardware, and scalability. One can also obtain trainable models to optimize applications and fine-tune theories, such as approximating the ground state of many-body systems and boosting the performance of quantum circuits.

The book begins by introducing programming tools and basic concepts of machine learning, with the necessary background material from quantum mechanics and quantum information also provided. This enables the basic building blocks, neural network models for vacuum states, to be introduced. The highlights that follow include: non-classical state representations, with squeezers and beam splitters used to implement the primary layers for quantum computing; boson sampling with neural network models; an overview of available quantum computing platforms, their models, and their programming; and neural network models as a variational ansatz for many-body Hamiltonian ground states, with applications to Ising machines and solitons.

The book emphasizes coding, with many open-source examples in Python and TensorFlow, while MATLAB and Mathematica routines clarify and validate proofs. It is essential reading for graduate students and researchers who want to develop both the requisite physics and coding knowledge to understand the rich interplay of quantum mechanics and machine learning.
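To give a flavor of the approach the blurb describes — representing quantum states by their characteristic function and propagating them through trainable layers — here is a minimal sketch in plain Python/NumPy (the book itself uses TensorFlow). The function names, the hbar = 1 convention, and the exp(-x·x/4) prefactor are illustrative assumptions, not the book's actual code; normalization conventions for the characteristic function vary between texts.

```python
import numpy as np

def vacuum_characteristic(x):
    """Characteristic function of the vacuum state over the 2N real
    quadrature variables x, here chi_0(x) = exp(-x.x/4) in an
    hbar = 1 convention (prefactors differ between references)."""
    x = np.asarray(x, dtype=float)
    return np.exp(-0.25 * np.dot(x, x))

def pullback(chi, M):
    """Model a linear (symplectic) transformation as a 'layer': the
    output state's characteristic function is the input one evaluated
    at the transformed argument M^T x (sign/transpose conventions
    vary; this is only a sketch of the composition idea)."""
    M = np.asarray(M, dtype=float)
    return lambda x: chi(M.T @ np.asarray(x, dtype=float))

# Cascading layers is just function composition; the identity
# transformation leaves the vacuum unchanged.
chi_out = pullback(vacuum_characteristic, np.eye(2))
print(chi_out([0.0, 0.0]))  # any normalized state gives chi(0) = 1
```

In the book's TensorFlow formulation, the same idea becomes differentiable layers whose matrix entries are trainable weights, so observables (derivatives of chi) can drive a loss function.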