For one- or two-semester undergraduate or graduate-level courses in Artificial Intelligence.
The long-anticipated revision of this best-selling text offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence.
I. Artificial Intelligence
1. Introduction
1.1 What is AI?
1.2 The Foundations of Artificial Intelligence
1.3 The History of Artificial Intelligence
1.4 The State of the Art
1.5 Summary, Bibliographical and Historical Notes, Exercises
2. Intelligent Agents
2.1 Agents and Environments
2.2 Good Behavior: The Concept of Rationality
2.3 The Nature of Environments
2.4 The Structure of Agents
2.5 Summary, Bibliographical and Historical Notes, Exercises
II. Problem-solving
3. Solving Problems by Searching
3.1 Problem-Solving Agents
3.2 Example Problems
3.3 Searching for Solutions
3.4 Uninformed Search Strategies
3.5 Informed (Heuristic) Search Strategies
3.6 Heuristic Functions
3.7 Summary, Bibliographical and Historical Notes, Exercises
4. Beyond Classical Search
4.1 Local Search Algorithms and Optimization Problems
4.2 Local Search in Continuous Spaces
4.3 Searching with Nondeterministic Actions
4.4 Searching with Partial Observations
4.5 Online Search Agents and Unknown Environments
4.6 Summary, Bibliographical and Historical Notes, Exercises
5. Adversarial Search
5.1 Games
5.2 Optimal Decisions in Games
5.3 Alpha-Beta Pruning
5.4 Imperfect Real-Time Decisions
5.5 Stochastic Games
5.6 Partially Observable Games
5.7 State-of-the-Art Game Programs
5.8 Alternative Approaches
5.9 Summary, Bibliographical and Historical Notes, Exercises
6. Constraint Satisfaction Problems
6.1 Defining Constraint Satisfaction Problems
6.2 Constraint Propagation: Inference in CSPs
6.3 Backtracking Search for CSPs
6.4 Local Search for CSPs
6.5 The Structure of Problems
6.6 Summary, Bibliographical and Historical Notes, Exercises
III. Knowledge, Reasoning, and Planning
7. Logical Agents
7.1 Knowledge-Based Agents
7.2 The Wumpus World
7.3 Logic
7.4 Propositional Logic: A Very Simple Logic
7.5 Propositional Theorem Proving
7.6 Effective Propositional Model Checking
7.7 Agents Based on Propositional Logic
7.8 Summary, Bibliographical and Historical Notes, Exercises
8. First-Order Logic
8.1 Representation Revisited
8.2 Syntax and Semantics of First-Order Logic
8.3 Using First-Order Logic
8.4 Knowledge Engineering in First-Order Logic
8.5 Summary, Bibliographical and Historical Notes, Exercises
9. Inference in First-Order Logic
9.1 Propositional vs. First-Order Inference
9.2 Unification and Lifting
9.3 Forward Chaining
9.4 Backward Chaining
9.5 Resolution
9.6 Summary, Bibliographical and Historical Notes, Exercises
10. Classical Planning
10.1 Definition of Classical Planning
10.2 Algorithms for Planning as State-Space Search
10.3 Planning Graphs
10.4 Other Classical Planning Approaches
10.5 Analysis of Planning Approaches
10.6 Summary, Bibliographical and Historical Notes, Exercises
11. Planning and Acting in the Real World
11.1 Time, Schedules, and Resources
11.2 Hierarchical Planning
11.3 Planning and Acting in Nondeterministic Domains
11.4 Multiagent Planning
11.5 Summary, Bibliographical and Historical Notes, Exercises
12. Knowledge Representation
12.1 Ontological Engineering
12.2 Categories and Objects
12.3 Events
12.4 Mental Events and Mental Objects
12.5 Reasoning Systems for Categories
12.6 Reasoning with Default Information
12.7 The Internet Shopping World
12.8 Summary, Bibliographical and Historical Notes, Exercises
IV. Uncertain Knowledge and Reasoning
13. Quantifying Uncertainty
13.1 Acting under Uncertainty
13.2 Basic Probability Notation
13.3 Inference Using Full Joint Distributions
13.4 Independence
13.5 Bayes' Rule and Its Use
13.6 The Wumpus World Revisited
13.7 Summary, Bibliographical and Historical Notes, Exercises
14. Probabilistic Reasoning
14.1 Representing Knowledge in an Uncertain Domain
14.2 The Semantics of Bayesian Networks
14.3 Efficient Representation of Conditional Distributions
14.4 Exact Inference in Bayesian Networks
14.5 Approximate Inference in Bayesian Networks
14.6 Relational and First-Order Probability Models
14.7 Other Approaches to Uncertain Reasoning
14.8 Summary, Bibliographical and Historical Notes, Exercises
15. Probabilistic Reasoning over Time
15.1 Time and Uncertainty
15.2 Inference in Temporal Models
15.3 Hidden Markov Models
15.4 Kalman Filters
15.5 Dynamic Bayesian Networks
15.6 Keeping Track of Many Objects
15.7 Summary, Bibliographical and Historical Notes, Exercises
16. Making Simple Decisions
16.1 Combining Beliefs and Desires under Uncertainty
16.2 The Basis of Utility Theory
16.3 Utility Functions
16.4 Multiattribute Utility Functions
16.5 Decision Networks
16.6 The Value of Information
16.7 Decision-Theoretic Expert Systems
16.8 Summary, Bibliographical and Historical Notes, Exercises
17. Making Complex Decisions
17.1 Sequential Decision Problems
17.2 Value Iteration
17.3 Policy Iteration
17.4 Partially Observable MDPs
17.5 Decisions with Multiple Agents: Game Theory
17.6 Mechanism Design
17.7 Summary, Bibliographical and Historical Notes, Exercises
V. Learning
18. Learning from Examples
18.1 Forms of Learning
18.2 Supervised Learning
18.3 Learning Decision Trees
18.4 Evaluating and Choosing the Best Hypothesis
18.5 The Theory of Learning
18.6 Regression and Classification with Linear Models
18.7 Artificial Neural Networks
18.8 Nonparametric Models
18.9 Support Vector Machines
18.10 Ensemble Learning
18.11 Practical Machine Learning
18.12 Summary, Bibliographical and Historical Notes, Exercises
19. Knowledge in Learning
19.1 A Logical Formulation of Learning
19.2 Knowledge in Learning
19.3 Explanation-Based Learning
19.4 Learning Using Relevance Information
19.5 Inductive Logic Programming
19.6 Summary, Bibliographical and Historical Notes, Exercises
20. Learning Probabilistic Models
20.1 Statistical Learning
20.2 Learning with Complete Data
20.3 Learning with Hidden Variables: The EM Algorithm
20.4 Summary, Bibliographical and Historical Notes, Exercises
21. Reinforcement Learning
21.1 Introduction
21.2 Passive Reinforcement Learning
21.3 Active Reinforcement Learning
21.4 Generalization in Reinforcement Learning
21.5 Policy Search
21.6 Applications of Reinforcement Learning
21.7 Summary, Bibliographical and Historical Notes, Exercises
VI. Communicating, Perceiving, and Acting
22. Natural Language Processing
22.1 Language Models
22.2 Text Classification
22.3 Information Retrieval