
MACHINE LEARNING. A BAYESIAN AND OPTIMIZATION PERSPECTIVE

Author: THEODORIDIS, S
Publisher: ACADEMIC PRESS
Year of publication: 2015
Subject: ARTIFICIAL INTELLIGENCE - GENERAL
ISBN: 978-0-12-801522-3
Pages: 1062
Price: 75,95 €

Synopsis

Key Features

All major classical techniques: Mean/Least-Squares regression and filtering, Kalman filtering, stochastic approximation and online learning, Bayesian classification, decision trees, logistic regression and boosting methods.
The latest trends: Sparsity, convex analysis and optimization, online distributed algorithms, learning in RKH spaces, Bayesian inference, graphical and hidden Markov models, particle filtering, deep learning, dictionary learning, and latent variable modeling.
Case studies - protein folding prediction, optical character recognition, text authorship identification, fMRI data analysis, change point detection, hyperspectral image unmixing, target localization, channel equalization, and echo cancellation - show how the theory can be applied.
MATLAB code for all the main algorithms is available on an accompanying website, enabling the reader to experiment with the code; a minimal illustrative sketch of one such algorithm appears below.
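To give a flavor of the kind of algorithm the book develops (and supplies as MATLAB code on the companion site), here is a minimal, self-contained MATLAB sketch of the least-mean-squares (LMS) adaptive filter covered in Chapter 5, applied to system identification. This is an illustrative reconstruction, not the book's companion code: the unknown system h, the step size mu, the noise level, and all variable names are assumptions made for the demo.

% Minimal LMS sketch (illustrative; not the book's companion code).
% System identification: estimate an unknown FIR filter h from its
% noisy output d when driven by a white input x.
rng(0);
N  = 2000;                            % number of samples
h  = [0.5; -0.4; 0.2];                % unknown system (assumed for the demo)
p  = numel(h);                        % filter order
mu = 0.01;                            % step size (assumed)
x  = randn(N,1);                      % white input signal
d  = filter(h,1,x) + 0.01*randn(N,1); % desired signal = system output + noise

w = zeros(p,1);                       % adaptive weight vector
for n = p:N
    xn = x(n:-1:n-p+1);               % current regressor (last p input samples)
    e  = d(n) - w'*xn;                % a priori estimation error
    w  = w + mu*e*xn;                 % stochastic-gradient (LMS) update
end
disp([h, w])                          % compare true and estimated weights

With this step size the weight vector converges close to h after a few hundred samples; the same stochastic-gradient update reappears throughout Chapter 5's variants (normalized LMS, affine projection, distributed LMS).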
Description

This tutorial text gives a unifying perspective on machine learning by covering both probabilistic and deterministic approaches - which are based on optimization techniques - together with the Bayesian inference approach, whose essence lies in the use of a hierarchy of probabilistic models.

The book presents the major machine learning methods as they have been developed in different disciplines, such as statistics, statistical and adaptive signal processing, and computer science. Focusing on the physical reasoning behind the mathematics, it explains all the various methods and techniques in depth, supported by examples and problems, making it an invaluable resource for students and researchers who need to understand and apply machine learning concepts.
The book builds carefully from basic classical methods to the most recent trends, with chapters written to be as self-contained as possible, making the text suitable for different courses: pattern recognition, statistical/adaptive signal processing, and statistical/Bayesian learning, as well as short courses on sparse modeling, deep learning, and probabilistic graphical models.



Table of Contents

Preface
Acknowledgments
Notation
Dedication
Chapter 1: Introduction
Abstract
1.1 What Machine Learning is About
1.2 Structure and a Road Map of the Book
Chapter 2: Probability and Stochastic Processes
Abstract
2.1 Introduction
2.2 Probability and Random Variables
2.3 Examples of Distributions
2.4 Stochastic Processes
2.5 Information Theory
2.6 Stochastic Convergence
Problems
Chapter 3: Learning in Parametric Modeling: Basic Concepts and Directions
Abstract
3.1 Introduction
3.2 Parameter Estimation: The Deterministic Point of View
3.3 Linear Regression
3.4 Classification
3.5 Biased Versus Unbiased Estimation
3.6 The Cramér-Rao Lower Bound
3.7 Sufficient Statistic
3.8 Regularization
3.9 The Bias-Variance Dilemma
3.10 Maximum Likelihood Method
3.11 Bayesian Inference
3.12 Curse of Dimensionality
3.13 Validation
3.14 Expected and Empirical Loss Functions
3.15 Nonparametric Modeling and Estimation
Problems
Chapter 4: Mean-Square Error Linear Estimation
Abstract
4.1 Introduction
4.2 Mean-Square Error Linear Estimation: The Normal Equations
Chapter 5: Stochastic Gradient Descent: The LMS Algorithm and its Family
Abstract
5.1 Introduction
5.2 The Steepest Descent Method
5.3 Application to the Mean-Square Error Cost Function
5.4 Stochastic Approximation
5.5 The Least-Mean-Squares Adaptive Algorithm
5.6 The Affine Projection Algorithm
5.7 The Complex-Valued Case
5.8 Relatives of the LMS
5.9 Simulation Examples
5.10 Adaptive Decision Feedback Equalization
5.11 The Linearly Constrained LMS
5.12 Tracking Performance of the LMS in Nonstationary Environments
5.13 Distributed Learning: The Distributed LMS
5.14 A Case Study: Target Localization
5.15 Some Concluding Remarks: Consensus Matrix
Problems
MATLAB Exercises
Chapter 6: The Least-Squares Family
Abstract
6.1 Introduction
6.2 Least-Squares Linear Regression: A Geometric Perspective
6.3 Statistical Properties of the LS Estimator
6.4 Orthogonalizing the Column Space of X: The SVD Method
6.5 Ridge Regression
6.6 The Recursive Least-Squares Algorithm
6.7 Newton's Iterative Minimization Method
6.8 Steady-State Performance of the RLS
6.9 Complex-Valued Data: The Widely Linear RLS
6.10 Computational Aspects of the LS Solution
6.11 The Coordinate and Cyclic Coordinate Descent Methods
6.12 Simulation Examples
6.13 Total-Least-Squares
Problems
Chapter 7: Classification: A Tour of the Classics
Abstract
7.1 Introduction
7.2 Bayesian Classification
7.3 Decision (Hyper)Surfaces
7.4 The Naive Bayes Classifier
7.5 The Nearest Neighbor Rule
7.6 Logistic Regression
7.7 Fisher's Linear Discriminant
7.8 Classification Trees
7.9 Combining Classifiers
7.10 The Boosting Approach
7.11 Boosting Trees
7.12 A Case Study: Protein Folding Prediction
Problems
Chapter 8: Parameter Learning: A Convex Analytic Path
Abstract
8.1 Introduction
8.2 Convex Sets and Functions
8.3 Projections onto Convex Sets
8.4 Fundamental Theorem of Projections onto Convex Sets
8.5 A Parallel Version of POCS
8.6 From Convex Sets to Parameter Estimation and Machine Learning
8.7 Infinitely Many Closed Convex Sets: The Online Learning Case
8.8 Constrained Learning
8.9 The Distributed APSM
8.10 Optimizing Nonsmooth Convex Cost Functions
8.11 Regret Analysis
8.12 Online Learning and Big Data Applications: A Discussion
8.13 Proximal Operators
8.14 Proximal Splitting Methods for Optimization
Problems
MATLAB Exercises
8.15 Appendix to Chapter 8
Chapter 9: Sparsity-Aware Learning: Concepts and Theoretical Foundations
Abstract
9.1 Introduction
9.2 Searching for a Norm
9.3 The Least Absolute Shrinkage and Selection Operator (LASSO)
9.4 Sparse Signal Representation
9.5 In Search of the Sparsest Solution
9.6 Uniqueness of the l0 Minimizer
9.7 Equivalence of l0 and l1 Minimizers: Sufficiency Conditions
9.8 Robust Sparse Signal Recovery from Noisy Measurements
9.9 Compressed Sensing: The Glory of Randomness
9.10 A Case Study: Image De-Noising
Problems
Chapter 10: Sparsity-Aware Learning: Algorithms and Applications
Abstract
10.1 Introduction
10.2 Sparsity-Promoting Algorithms
10.3 Variations on the Sparsity-Aware Theme
10.4 Online Sparsity-Promoting Algorithms
10.5 Learning Sparse Analysis Models
10.6 A Case Study: Time-Frequency Analysis
10.7 Appendix to Chapter 10: Some Hints from the Theory of Frames
Problems
Chapter 11: Learning in Reproducing Kernel Hilbert Spaces
Abstract
11.1 Introduction
11.2 Generalized Linear Models
11.3 Volterra, Wiener, and Hammerstein Models
11.4 Cover's Theorem: Capacity of a Space in Linear Dichotomies
11.5 Reproducing Kernel Hilbert Spaces
11.6 Representer Theorem
11.7 Kernel Ridge Regression
11.8 Support Vector Regression
11.9 Kernel Ridge Regression Revisited
11.10 Optimal Margin Classification: Support Vector Machines
11.11 Computational Considerations
11.12 Online Learning in RKHS