Deep Learning for Physical Scientists: Accelerating Research with Machine Learning

ISBN-13: 9781119408338 / English / Hardcover / 2021 / 208 pp.

Edward O. Pyzer-Knapp

Price: 319.38 zł
(net: 304.17, VAT: 5%)

Lowest price in the last 30 days: 314.83 zł
Order fulfilment time: approx. 30 working days.

Free delivery!

This book introduces the reader to the transformative techniques involved in deep learning. A range of methodologies is addressed, including:

* Basic classification and regression with perceptrons
* Training algorithms, such as back propagation and stochastic gradient descent, and the parallelization of training
* Multi-layer perceptrons for learning from descriptors, and de-noising data
* Recurrent neural networks for learning from sequences
* Convolutional neural networks for learning from images
* Bayesian optimization for tuning deep learning architectures

Each of these areas has direct application to physical science research, and by the end of the book the reader should feel comfortable selecting the methodology best suited to their situation, and be able to implement and interpret the outcome of a deep learning model. The book is designed to teach researchers to think in new ways, providing them with new avenues to attack problems and avoid roadblocks within their research. To that end, each chapter closes with case-study-like problems that give the reader a chance to practice what they have just learnt in a close-to-real-world setting, with example 'solutions' provided through an online resource.
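As a flavour of what the early chapters build up to, here is a minimal sketch of a single-hidden-layer perceptron classifier written against TensorFlow's Keras API. It is illustrative only, not an excerpt from the book, and the synthetic data stands in for the descriptor vectors a physical scientist might use.

    # Minimal sketch (not from the book): a single-hidden-layer perceptron
    # for binary classification, using the Keras API bundled with TensorFlow.
    import numpy as np
    import tensorflow as tf

    rng = np.random.default_rng(seed=0)
    X = rng.normal(size=(500, 8)).astype("float32")  # 500 samples, 8 toy descriptors
    y = (X.sum(axis=1) > 0).astype("float32")        # synthetic binary target

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),
        tf.keras.layers.Dense(16, activation="relu"),    # hidden layer
        tf.keras.layers.Dense(1, activation="sigmoid"),  # probability output
    ])
    # Adam and cross-entropy, two of the training choices the book discusses.
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=32, verbose=0)
    print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]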

Categories:
Science, Chemistry
BISAC categories:
Science > Chemistry - Physical
Publisher:
Wiley-Blackwell (an imprint of John Wiley & Sons)
Language:
English
ISBN-13:
9781119408338
Publication year:
2021
Number of pages:
208
Weight:
0.45 kg
Dimensions:
22.86 x 15.24 x 1.27 cm
Binding:
Hardcover
Volumes:
1

Table of Contents

About the Authors xi
Acknowledgements xii
1 Prefix - Learning to "Think Deep" 1
1.1 So What Do I Mean by Changing the Way You Think? 2
2 Setting Up a Python Environment for Deep Learning Projects 5
2.1 Python Overview 5
2.2 Why Use Python for Data Science? 6
2.3 Anaconda Python 7
2.3.1 Why Use Anaconda? 7
2.3.2 Downloading and Installing Anaconda Python 7
2.3.2.1 Installing TensorFlow 9
2.4 Jupyter Notebooks 10
2.4.1 Why Use a Notebook? 10
2.4.2 Starting a Jupyter Notebook Server 11
2.4.3 Adding Markdown to Notebooks 12
2.4.4 A Simple Plotting Example 14
2.4.5 Summary 16
3 Modelling Basics 17
3.1 Introduction 17
3.2 Start Where You Mean to Go On - Input Definition and Creation 17
3.3 Loss Functions 18
3.3.1 Classification and Regression 19
3.3.2 Regression Loss Functions 19
3.3.2.1 Mean Absolute Error 19
3.3.2.2 Root Mean Squared Error 19
3.3.3 Classification Loss Functions 20
3.3.3.1 Precision 21
3.3.3.2 Recall 21
3.3.3.3 F1 Score 22
3.3.3.4 Confusion Matrix 22
3.3.3.5 (Area Under) Receiver Operator Curve (AU-ROC) 23
3.3.3.6 Cross Entropy 25
3.4 Overfitting and Underfitting 28
3.4.1 Bias-Variance Trade-Off 29
3.5 Regularisation 31
3.5.1 Ridge Regression 31
3.5.2 LASSO Regularisation 33
3.5.3 Elastic Net 34
3.5.4 Bagging and Model Averaging 34
3.6 Evaluating a Model 35
3.6.1 Holdout Testing 35
3.6.2 Cross Validation 36
3.7 The Curse of Dimensionality 37
3.7.1 Normalising Inputs and Targets 37
3.8 Summary 39
Notes 39
4 Feedforward Networks and Multilayered Perceptrons 41
4.1 Introduction 41
4.2 The Single Perceptron 41
4.2.1 Training a Perceptron 41
4.2.2 Activation Functions 42
4.2.3 Back Propagation 43
4.2.3.1 Weight Initialisation 45
4.2.3.2 Learning Rate 46
4.2.4 Key Assumptions 46
4.2.5 Putting It All Together in TensorFlow 47
4.3 Moving to a Deep Network 49
4.4 Vanishing Gradients and Other "Deep" Problems 53
4.4.1 Gradient Clipping 54
4.4.2 Non-saturating Activation Functions 54
4.4.2.1 ReLU 54
4.4.2.2 Leaky ReLU 56
4.4.2.3 ELU 57
4.4.3 More Complex Initialisation Schemes 57
4.4.3.1 Xavier 58
4.4.3.2 He 58
4.4.4 Mini Batching 59
4.5 Improving the Optimisation 60
4.5.1 Bias 60
4.5.2 Momentum 63
4.5.3 Nesterov Momentum 63
4.5.4 (Adaptive) Learning Rates 63
4.5.5 AdaGrad 64
4.5.6 RMSProp 65
4.5.7 Adam 65
4.5.8 Regularisation 66
4.5.9 Early Stopping 66
4.5.10 Dropout 68
4.6 Parallelisation of learning 69
4.6.1 Hogwild! 69
4.7 High and Low-level Tensorflow APIs 70
4.8 Architecture Implementations 72
4.9 Summary 73
4.10 Papers to Read 73
5 Recurrent Neural Networks 77
5.1 Introduction 77
5.2 Basic Recurrent Neural Networks 77
5.2.1 Training a Basic RNN 78
5.2.2 Putting It All Together in TensorFlow 79
5.2.3 The Problem with Vanilla RNNs 81
5.3 Long Short-Term Memory (LSTM) Networks 82
5.3.1 Forget Gate 82
5.3.2 Input Gate 84
5.3.3 Output Gate 84
5.3.4 Peephole Connections 85
5.3.5 Putting It All Together in TensorFlow 86
5.4 Gated Recurrent Units 87
5.4.1 Putting It All Together in TensorFlow 88
5.5 Using Keras for RNNs 88
5.6 Real World Implementations 89
5.7 Summary 89
5.8 Papers to Read 90
6 Convolutional Neural Networks 93
6.1 Introduction 93
6.2 Fundamental Principles of Convolutional Neural Networks 94
6.2.1 Convolution 94
6.2.2 Pooling 95
6.2.2.1 Why Use Pooling? 95
6.2.2.2 Types of Pooling 96
6.2.3 Stride and Padding 99
6.2.4 Sparse Connectivity 101
6.2.5 Parameter Sharing 101
6.2.6 Convolutional Neural Networks with TensorFlow 102
6.3 Graph Convolutional Networks 103
6.3.1 Graph Convolutional Networks in Practice 104
6.4 Real World Implementations 107
6.5 Summary 108
6.6 Papers to Read 108
7 Auto-Encoders 111
7.1 Introduction 111
7.1.1 Auto-Encoders for Dimensionality Reduction 111
7.2 Getting a Good Start - Stacked Auto-Encoders, Restricted Boltzmann Machines, and Pretraining 115
7.2.1 Restricted Boltzmann Machines 115
7.2.2 Stacking Restricted Boltzmann Machines 118
7.3 Denoising Auto-Encoders 120
7.4 Variational Auto-Encoders 121
7.5 Sequence to Sequence Learning 125
7.6 The Attention Mechanism 126
7.7 Application in Chemistry: Building a Molecular Generator 127
7.8 Summary 132
7.9 Real World Implementations 132
7.10 Papers to Read 132
8 Optimising Models Using Bayesian Optimisation 135
8.1 Introduction 135
8.2 Defining Our Function 135
8.3 Grid and Random Search 136
8.4 Moving Towards an Intelligent Search 137
8.5 Exploration and Exploitation 137
8.6 Greedy Search 138
8.6.1 Key Fact One - Exploitation Heavy Search is Susceptible to Initial Data Bias 139
8.7 Diversity Search 141
8.8 Bayesian Optimisation 142
8.8.1 Domain Knowledge (or Prior) 142
8.8.2 Gaussian Processes 145
8.8.3 Kernels 146
8.8.3.1 Stationary Kernels 146
8.8.3.2 Noise Kernel 147
8.8.4 Combining Gaussian Process Prediction and Optimisation 149
8.8.4.1 Probability of Improvement 149
8.8.4.2 Expected Improvement 150
8.8.5 Balancing Exploration and Exploitation 151
8.8.6 Upper and Lower Confidence Bound Algorithm 151
8.8.7 Maximum Entropy Sampling 152
8.8.8 Optimising the Acquisition Function 153
8.8.9 Cost Sensitive Bayesian Optimisation 155
8.8.10 Constrained Bayesian Optimisation 158
8.8.11 Parallel Bayesian Optimisation 158
8.8.11.1 qEI 158
8.8.11.2 Constant Liar and Kriging Believer 160
8.8.11.3 Local Penalisation 162
8.8.11.4 Parallel Thompson Sampling 162
8.8.11.5 K-Means Batch Bayesian Optimisation 162
8.9 Summary 163
8.10 Papers to Read 163
Case Study 1 Solubility Prediction Case Study 167
CS 1.1 Step 1 - Import Packages 167
CS 1.2 Step 2 - Importing the Data 168
CS 1.3 Step 3 - Creating the Inputs 168
CS 1.4 Step 4 - Splitting into Training and Testing 168
CS 1.5 Step 5 - Defining Our Model 169
CS 1.6 Step 6 - Running Our Model 169
CS 1.7 Step 7 - Automatically Finding an Optimised Architecture Using Bayesian Optimisation 170
Case Study 2 Time Series Forecasting with LSTMs 173
CS 2.1 Simple LSTM 173
CS 2.2 Sequence-to-Sequence LSTM 177
Case Study 3 Deep Embeddings for Auto-Encoder-Based Featurisation 185
Index 190

Dr Edward O. Pyzer-Knapp is the worldwide lead for AI Enriched Modelling and Simulation at IBM Research. He obtained his PhD from the University of Cambridge, using state-of-the-art computational techniques to accelerate materials design, before moving to Harvard, where he was in charge of the day-to-day running of the Harvard Clean Energy Project, a collaboration with IBM which combined massive distributed computing, quantum-mechanical simulations, and machine learning to accelerate the discovery of the next generation of organic photovoltaic materials. He is also Visiting Professor of Industrially Applied AI at the University of Liverpool, and Editor in Chief of Applied AI Letters, a journal with a focus on the real-world application and validation of AI.

Dr Matt Benatan received his PhD in Audio-Visual Speech Processing from the University of Leeds, after which he pursued a career in industrial AI research. His work to date has involved the research and development of AI techniques for a broad variety of domains, from applications in audio processing through to materials discovery. His research interests include computer vision, signal processing, Bayesian optimization, and scalable Bayesian inference.


