Book Details

Artificial Intelligence: A Modern Approach

Authors: Stuart Russell and Peter Norvig (USA)

Publisher: Tsinghua University Press

Publication date: 2006-05-01

ISBN: 9787302128298

List price: ¥128.00
About the Book
This book has been adopted as a textbook by more than 900 universities in 89 countries. Drawing on detailed and extensive material, it presents the core content of artificial intelligence from the perspective of rational agents and gives an in-depth introduction to each of the major research directions. The book is organized into eight parts: Part I "Artificial Intelligence", Part II "Problem Solving", Part III "Knowledge and Reasoning", Part IV "Planning", Part V "Uncertain Knowledge and Reasoning", Part VI "Learning", Part VII "Communicating, Perceiving, and Acting", and Part VIII "Conclusions". It explains the basic concepts, ideas, and algorithms of artificial intelligence in detail, describes the state of the art in each research direction, and assembles a thorough record of the relevant historical literature and events. In addition, the companion web site, http://aima.cs.berkeley.edu/, provides a wealth of teaching and learning material for instructors and students. The book is suitable for researchers and students at many levels and in many fields: it is a first-choice textbook for undergraduate and graduate courses in artificial intelligence and an important reference for researchers and engineers in related areas.
About the Author
Stuart Russell was born in 1962 in Portsmouth, England. He received his B.A. with first-class honours in physics from Oxford University in 1982, and his Ph.D. in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is a professor of computer science, director of the Center for Intelligent Systems, and holder of the Smith-Zadeh Chair in Engineering. In 1990, he received the Presidential Young Investigator Award of the National Science Foundation, and in 1995 he was cowinner of the Computers and Thought Award. He was a 1996 Miller Professor of the University of California and was appointed to a Chancellor's Professorship in 2000. In 1998, he gave the Forsythe Memorial Lectures at Stanford University. He is a Fellow and former Executive Council member of the American Association for Artificial Intelligence. He has published over 100 papers on a wide range of topics in artificial intelligence. His other books include The Use of Knowledge in Analogy and Induction and (with Eric Wefald) Do the Right Thing: Studies in Limited Rationality.
Table of Contents
Summary of Contents
I Artificial Intelligence
1 Introduction 1
2 Intelligent Agents 32
II Problem-solving
3 Solving Problems by Searching 59
4 Informed Search and Exploration 94
5 Constraint Satisfaction Problems 137
6 Adversarial Search 161
III Knowledge and reasoning
7 Logical Agents 194
8 First-Order Logic 240
9 Inference in First-Order Logic 272
10 Knowledge Representation 320
IV Planning
11 Planning 375
12 Planning and Acting in the Real World 417
V Uncertain knowledge and reasoning
13 Uncertainty 462
14 Probabilistic Reasoning 492
15 Probabilistic Reasoning over Time 537
16 Making Simple Decisions 584
17 Making Complex Decisions 613
VI Learning
18 Learning from Observations 649
19 Knowledge in Learning 678
20 Statistical Learning Methods 712
21 Reinforcement Learning 763
VII Communicating, perceiving, and acting
22 Communication 790
23 Probabilistic Language Processing 834
24 Perception 863
25 Robotics 901
VIII Conclusions
26 Philosophical Foundations 947
27 AI: Present and Future 968
A Mathematical background 977
B Notes on Languages and Algorithms 984
Bibliography 987
Index 1045
Contents
I Artificial Intelligence
1 Introduction 1
  1.1 What is AI? 1
    Acting humanly: The Turing Test approach 2
    Thinking humanly: The cognitive modeling approach 3
    Thinking rationally: The "laws of thought" approach 4
    Acting rationally: The rational agent approach 4
  1.2 The Foundations of Artificial Intelligence 5
    Philosophy (428 B.C.--present) 5
    Mathematics (c. 800--present) 7
    Economics (1776--present) 9
    Neuroscience (1861--present) 10
    Psychology (1879--present) 12
    Computer engineering (1940--present) 14
    Control theory and cybernetics (1948--present) 15
    Linguistics (1957--present) 16
  1.3 The History of Artificial Intelligence 16
    The gestation of artificial intelligence (1943--1955) 16
    The birth of artificial intelligence (1956) 17
    Early enthusiasm, great expectations (1952--1969) 18
    A dose of reality (1966--1973) 21
    Knowledge-based systems: The key to power? (1969--1979) 22
    AI becomes an industry (1980--present) 24
    The return of neural networks (1986--present) 25
    AI becomes a science (1987--present) 25
    The emergence of intelligent agents (1995--present) 27
  1.4 The State of the Art 27
  1.5 Summary 28
  Bibliographical and Historical Notes 29
  Exercises 30
2 Intelligent Agents 32
  2.1 Agents and Environments 32
  2.2 Good Behavior: The Concept of Rationality 34
    Performance measures 35
    Rationality 35
    Omniscience, learning, and autonomy 36
  2.3 The Nature of Environments 38
    Specifying the task environment 38
    Properties of task environments 40
  2.4 The Structure of Agents 44
    Agent programs 44
    Simple reflex agents 46
    Model-based reflex agents 48
    Goal-based agents 49
    Utility-based agents 51
    Learning agents 51
  2.5 Summary 54
  Bibliographical and Historical Notes 55
  Exercises 56
II Problem-solving
3 Solving Problems by Searching 59
  3.1 Problem-Solving Agents 59
    Well-defined problems and solutions 62
    Formulating problems 62
  3.2 Example Problems 64
    Toy problems 64
    Real-world problems 67
  3.3 Searching for Solutions 69
    Measuring problem-solving performance 71
  3.4 Uninformed Search Strategies 73
    Breadth-first search 73
    Depth-first search 75
    Depth-limited search 77
    Iterative deepening depth-first search 78
    Bidirectional search 79
    Comparing uninformed search strategies 81
  3.5 Avoiding Repeated States 81
  3.6 Searching with Partial Information 83
    Sensorless problems 84
    Contingency problems 86
  3.7 Summary 87
  Bibliographical and Historical Notes 88
  Exercises 89
4 Informed Search and Exploration 94
  4.1 Informed (Heuristic) Search Strategies 94
    Greedy best-first search 95
    A* search: Minimizing the total estimated solution cost 97
    Memory-bounded heuristic search 101
    Learning to search better 104
  4.2 Heuristic Functions 105
    The effect of heuristic accuracy on performance 106
    Inventing admissible heuristic functions 107
    Learning heuristics from experience 109
  4.3 Local Search Algorithms and Optimization Problems 110
    Hill-climbing search 111
    Simulated annealing search 115
    Local beam search 115
    Genetic algorithms 116
  4.4 Local Search in Continuous Spaces 119
  4.5 Online Search Agents and Unknown Environments 122
    Online search problems 123
    Online search agents 125
    Online local search 126
    Learning in online search 127
  4.6 Summary 129
  Bibliographical and Historical Notes 130
  Exercises 134
5 Constraint Satisfaction Problems 137
  5.1 Constraint Satisfaction Problems 137
  5.2 Backtracking Search for CSPs 141
    Variable and value ordering 143
    Propagating information through constraints 144
    Intelligent backtracking: looking backward 148
  5.3 Local Search for Constraint Satisfaction Problems 150
  5.4 The Structure of Problems 151
  5.5 Summary 155
  Bibliographical and Historical Notes 156
  Exercises 158
6 Adversarial Search 161
  6.1 Games 161
  6.2 Optimal Decisions in Games 162
    Optimal strategies 163
    The minimax algorithm 165
    Optimal decisions in multiplayer games 165
  6.3 Alpha-Beta Pruning 167
  6.4 Imperfect, Real-Time Decisions 171
    Evaluation functions 171
    Cutting off search 173
  6.5 Games That Include an Element of Chance 175
    Position evaluation in games with chance nodes 177
    Complexity of expectiminimax 177
    Card games 179
  6.6 State-of-the-Art Game Programs 180
  6.7 Discussion 183
  6.8 Summary 185
  Bibliographical and Historical Notes 186
  Exercises 189
III Knowledge and reasoning
7 Logical Agents 194
  7.1 Knowledge-Based Agents 195
  7.2 The Wumpus World 197
  7.3 Logic 200
  7.4 Propositional Logic: A Very Simple Logic 204
    Syntax 204
    Semantics 206
    A simple knowledge base 208
    Inference 208
    Equivalence, validity, and satisfiability 210
  7.5 Reasoning Patterns in Propositional Logic 211
    Resolution 213
    Forward and backward chaining 217
  7.6 Effective Propositional Inference 220
    A complete backtracking algorithm 221
    Local-search algorithms 222
    Hard satisfiability problems 224
  7.7 Agents Based on Propositional Logic 225
    Finding pits and wumpuses using logical inference 225
    Keeping track of location and orientation 227
    Circuit-based agents 227
    A comparison 231
  7.8 Summary 232
  Bibliographical and Historical Notes 233
  Exercises 236
8 First-Order Logic 240
  8.1 Representation Revisited 240
  8.2 Syntax and Semantics of First-Order Logic 245
    Models for first-order logic 245
    Symbols and interpretations 246
    Terms 248
    Atomic sentences 248
    Complex sentences 249
    Quantifiers 249
    Equality 253
  8.3 Using First-Order Logic 253
    Assertions and queries in first-order logic 253
    The kinship domain 254
    Numbers, sets, and lists 256
    The wumpus world 258
  8.4 Knowledge Engineering in First-Order Logic 260
    The knowledge engineering process 261
    The electronic circuits domain 262
  8.5 Summary 266
  Bibliographical and Historical Notes 267
  Exercises 268
9 Inference in First-Order Logic 272
  9.1 Propositional vs. First-Order Inference 272
    Inference rules for quantifiers 273
    Reduction to propositional inference 274
  9.2 Unification and Lifting 275
    A first-order inference rule 275
    Unification 276
    Storage and retrieval 278
  9.3 Forward Chaining 280
    First-order definite clauses 280
    A simple forward-chaining algorithm 281
    Efficient forward chaining 283
  9.4 Backward Chaining 287
    A backward chaining algorithm 287
    Logic programming 289
    Efficient implementation of logic programs 290
    Redundant inference and infinite loops 292
    Constraint logic programming 294
  9.5 Resolution 295
    Conjunctive normal form for first-order logic 295
    The resolution inference rule 297
    Example proofs 297
    Completeness of resolution 300
    Dealing with equality 303
    Resolution strategies 304
    Theorem provers 306
  9.6 Summary 310
  Bibliographical and Historical Notes 310
  Exercises 315
10 Knowledge Representation 320
  10.1 Ontological Engineering 320
  10.2 Categories and Objects 322
    Physical composition 324
    Measurements 325
    Substances and objects 327
  10.3 Actions, Situations, and Events 328
    The ontology of situation calculus 329
    Describing actions in situation calculus 330
    Solving the representational frame problem 332
    Solving the inferential frame problem 333
    Time and event calculus 334
    Generalized events 335
    Processes 337
    Intervals 338
    Fluents and objects 339
  10.4 Mental Events and Mental Objects 341
    A formal theory of beliefs 341
    Knowledge and belief 343
    Knowledge, time, and action 344
  10.5 The Internet Shopping World 344
    Comparing offers 348
  10.6 Reasoning Systems for Categories 349
    Semantic networks 350
    Description logics 353
  10.7 Reasoning with Default Information 354
    Open and closed worlds 354
    Negation as failure and stable model semantics 356
    Circumscription and default logic 358
  10.8 Truth Maintenance Systems 360
  10.9 Summary 362
  Bibliographical and Historical Notes 363
  Exercises 369
IV Planning
11 Planning 375
  11.1 The Planning Problem 375
    The language of planning problems 377
    Expressiveness and extensions 378
    Example: Air cargo transport 380
    Example: The spare tire problem 381
    Example: The blocks world 381
  11.2 Planning with State-Space Search 382
    Forward state-space search 382
    Backward state-space search 384
    Heuristics for state-space search 386
  11.3 Partial-Order Planning 387
    A partial-order planning example 391
    Partial-order planning with unbound variables 393
    Heuristics for partial-order planning 394
  11.4 Planning Graphs 395
    Planning graphs for heuristic estimation 397
    The GRAPHPLAN algorithm 398
    Termination of GRAPHPLAN 401
  11.5 Planning with Propositional Logic 402
    Describing planning problems in propositional logic 402
    Complexity of propositional encodings 405
  11.6 Analysis of Planning Approaches 407
  11.7 Summary 408
  Bibliographical and Historical Notes 409
  Exercises 412
12 Planning and Acting in the Real World 417
  12.1 Time, Schedules, and Resources 417
    Scheduling with resource constraints 420
  12.2 Hierarchical Task Network Planning 422
    Representing action decompositions 423
    Modifying the planner for decompositions 425
    Discussion 427
  12.3 Planning and Acting in Nondeterministic Domains 430
  12.4 Conditional Planning 433
    Conditional planning in fully observable environments 433
    Conditional planning in partially observable environments 437
  12.5 Execution Monitoring and Replanning 441
  12.6 Continuous Planning 445
  12.7 Multiagent Planning 449
    Cooperation: Joint goals and plans 450
    Multibody planning 451
    Coordination mechanisms 452
    Competition 454
  12.8 Summary 454
  Bibliographical and Historical Notes 455
  Exercises 459
V Uncertain knowledge and reasoning
13 Uncertainty 462
  13.1 Acting under Uncertainty 462
    Handling uncertain knowledge 463
    Uncertainty and rational decisions 465
    Design for a decision-theoretic agent 466
  13.2 Basic Probability Notation 466
    Propositions 467
    Atomic events 468
    Prior probability 468
    Conditional probability 470
  13.3 The Axioms of Probability 471
    Using the axioms of probability 473
    Why the axioms of probability are reasonable 473
  13.4 Inference Using Full Joint Distributions 475
  13.5 Independence 477
  13.6 Bayes' Rule and Its Use 479
    Applying Bayes' rule: The simple case 480
    Using Bayes' rule: Combining evidence 481
  13.7 The Wumpus World Revisited 483
  13.8 Summary 486
  Bibliographical and Historical Notes 487
  Exercises 489
14 Probabilistic Reasoning 492
  14.1 Representing Knowledge in an Uncertain Domain 492
  14.2 The Semantics of Bayesian Networks 495
    Representing the full joint distribution 495
    Conditional independence relations in Bayesian networks 499
  14.3 Efficient Representation of Conditional Distributions 500
  14.4 Exact Inference in Bayesian Networks 504
    Inference by enumeration 504
    The variable elimination algorithm 507
    The complexity of exact inference 509
    Clustering algorithms 510
  14.5 Approximate Inference in Bayesian Networks 511
    Direct sampling methods 511
    Inference by Markov chain simulation 516
  14.6 Extending Probability to First-Order Representations 519
  14.7 Other Approaches to Uncertain Reasoning 523
    Rule-based methods for uncertain reasoning 524
    Representing ignorance: Dempster-Shafer theory 525
    Representing vagueness: Fuzzy sets and fuzzy logic 526
  14.8 Summary 528
  Bibliographical and Historical Notes 528
  Exercises 533
15 Probabilistic Reasoning over Time 537
  15.1 Time and Uncertainty 537
    States and observations 538
    Stationary processes and the Markov assumption 538
  15.2 Inference in Temporal Models 541
    Filtering and prediction 542
    Smoothing 544
    Finding the most likely sequence 547
  15.3 Hidden Markov Models 549
    Simplified matrix algorithms 549
  15.4 Kalman Filters 551
    Updating Gaussian distributions 553
    A simple one-dimensional example 554
    The general case 556
    Applicability of Kalman filtering 557
  15.5 Dynamic Bayesian Networks 559
    Constructing DBNs 560
    Exact inference in DBNs 563
    Approximate inference in DBNs 565
  15.6 Speech Recognition 568
    Speech sounds 570
    Words 572
    Sentences 574
    Building a speech recognizer 576
  15.7 Summary 578
  Bibliographical and Historical Notes 578
  Exercises 581
16 Making Simple Decisions 584
  16.1 Combining Beliefs and Desires under Uncertainty 584
  16.2 The Basis of Utility Theory 586
    Constraints on rational preferences 586
    And then there was Utility 588
  16.3 Utility Functions 589
    The utility of money 589
    Utility scales and utility assessment 591
  16.4 Multiattribute Utility Functions 593
    Dominance 594
    Preference structure and multiattribute utility 596
  16.5 Decision Networks 597
    Representing a decision problem with a decision network 598
    Evaluating decision networks 599
  16.6 The Value of Information 600
    A simple example 600
    A general formula 601
    Properties of the value of information 602
    Implementing an information-gathering agent 603
  16.7 Decision-Theoretic Expert Systems 604
  16.8 Summary 607
  Bibliographical and Historical Notes 607
  Exercises 609
17 Making Complex Decisions 613
  17.1 Sequential Decision Problems 613
    An example 613
    Optimality in sequential decision problems 616
  17.2 Value Iteration 618
    Utilities of states 619
    The value iteration algorithm 620
    Convergence of value iteration 620
  17.3 Policy Iteration 624
  17.4 Partially Observable MDPs 625
  17.5 Decision-Theoretic Agents 629
  17.6 Decisions with Multiple Agents: Game Theory 631
  17.7 Mechanism Design 640
  17.8 Summary 643
  Bibliographical and Historical Notes 644
  Exercises 646
VI Learning
18 Learning from Observations 649
  18.1 Forms of Learning 649
  18.2 Inductive Learning 651
  18.3 Learning Decision Trees 653
    Decision trees as performance elements 653
    Expressiveness of decision trees 655
    Inducing decision trees from examples 655
    Choosing attribute tests 659
    Assessing the performance of the learning algorithm 660
    Noise and overfitting 661
    Broadening the applicability of decision trees 663
  18.4 Ensemble Learning 664
  18.5 Why Learning Works: Computational Learning Theory 668
    How many examples are needed? 669
    Learning decision lists 670
    Discussion 672
  18.6 Summary 673
  Bibliographical and Historical Notes 674
  Exercises 676
19 Knowledge in Learning 678
  19.1 A Logical Formulation of Learning 678
    Examples and hypotheses 678
    Current-best-hypothesis search 680
    Least-commitment search 683
  19.2 Knowledge in Learning 686
    Some simple examples 687
    Some general schemes 688
  19.3 Explanation-Based Learning 690
    Extracting general rules from examples 691
    Improving efficiency 693
  19.4 Learning Using Relevance Information 694
    Determining the hypothesis space 695
    Learning and using relevance information 695
  19.5 Inductive Logic Programming 697
    An example 699
    Top-down inductive learning methods 701
    Inductive learning with inverse deduction 703
    Making discoveries with inductive logic programming 705
  19.6 Summary 707
  Bibliographical and Historical Notes 708
  Exercises 710
20 Statistical Learning Methods 712
  20.1 Statistical Learning 712
  20.2 Learning with Complete Data 716
    Maximum-likelihood parameter learning: Discrete models 716
    Naive Bayes models 718
    Maximum-likelihood parameter learning: Continuous models 719
    Bayesian parameter learning 720
    Learning Bayes net structures 722
  20.3 Learning with Hidden Variables: The EM Algorithm 724
    Unsupervised clustering: Learning mixtures of Gaussians 725
    Learning Bayesian networks with hidden variables 727
    Learning hidden Markov models 731
    The general form of the EM algorithm 731
    Learning Bayes net structures with hidden variables 732
  20.4 Instance-Based Learning 733
    Nearest-neighbor models 733
    Kernel models 735
  20.5 Neural Networks 736
    Units in neural networks 737
    Network structures 738
    Single layer feed-forward neural networks (perceptrons) 740
    Multilayer feed-forward neural networks 744
    Learning neural network structures 748
  20.6 Kernel Machines 749
  20.7 Case Study: Handwritten Digit Recognition 752
  20.8 Summary 754
  Bibliographical and Historical Notes 755
  Exercises 759
21 Reinforcement Learning 763
  21.1 Introduction 763
  21.2 Passive Reinforcement Learning 765
    Direct utility estimation 766
    Adaptive dynamic programming 767
    Temporal difference learning 767
  21.3 Active Reinforcement Learning 771
    Exploration 771
    Learning an Action-Value Function 775
  21.4 Generalization in Reinforcement Learning 777
    Applications to game-playing 780
    Application to robot control 780
  21.5 Policy Search 781
  21.6 Summary 784
  Bibliographical and Historical Notes 785
  Exercises 788
VII Communicating, perceiving, and acting
22 Communication 790
  22.1 Communication as Action 790
    Fundamentals of language 791
    The component steps of communication 792
  22.2 A Formal Grammar for a Fragment of English 795
    The Lexicon of ε0 795
    The Grammar of ε0 796
  22.3 Syntactic Analysis (Parsing) 798
    Efficient parsing 800
  22.4 Augmented Grammars 806
    Verb subcategorization 808
    Generative capacity of augmented grammars 809
  22.5 Semantic Interpretation 810
    The semantics of an English fragment 811
    Time and tense 812
    Quantification 813
    Pragmatic interpretation 815
    Language generation with DCGs 817
  22.6 Ambiguity and Disambiguation 818
    Disambiguation 820
  22.7 Discourse Understanding 821
    Reference resolution 821
    The structure of coherent discourse 823
  22.8 Grammar Induction 824
  22.9 Summary 826
  Bibliographical and Historical Notes 827
  Exercises 831
23 Probabilistic Language Processing 834
  23.1 Probabilistic Language Models 834
    Probabilistic context-free grammars 836
    Learning probabilities for PCFGs 839
    Learning rule structure for PCFGs 840
  23.2 Information Retrieval 840
    Evaluating IR systems 842
    IR refinements 844
    Presentation of result sets 845
    Implementing IR systems 846
  23.3 Information Extraction 848
  23.4 Machine Translation 850
    Machine translation systems 852
    Statistical machine translation 853
    Learning probabilities for machine translation 856
  23.5 Summary 857
  Bibliographical and Historical Notes 858
  Exercises 861
24 Perception 863
  24.1 Introduction 863
  24.2 Image Formation 865
    Images without lenses: the pinhole camera 865
    Lens systems 866
    Light: the photometry of image formation 867
    Color: the spectrophotometry of image formation 868
  24.3 Early Image Processing Operations 869
    Edge detection 870
    Image segmentation 872
  24.4 Extracting Three-Dimensional Information 873
    Motion 875
    Binocular stereopsis 876
    Texture gradients 879
    Shading 880
    Contour 881
  24.5 Object Recognition 885
    Brightness-based recognition 887
    Feature-based recognition 888
    Pose Estimation 890
  24.6 Using Vision for Manipulation and Navigation 892
  24.7 Summary 894
  Bibliographical and Historical Notes 895
  Exercises 898
25 Robotics 901
  25.1 Introduction 901
  25.2 Robot Hardware 903
    Sensors 903
    Effectors 904
  25.3 Robotic Perception 907
    Localization 908
    Mapping 913
    Other types of perception 915
  25.4 Planning to Move 916
    Configuration space 916
    Cell decomposition methods 919
    Skeletonization methods 922
  25.5 Planning Uncertain Movements 923
    Robust methods 924
  25.6 Moving 926
    Dynamics and control 927
    Potential field control 929
    Reactive control 930
  25.7 Robotic Software Architectures 932
    Subsumption architecture 932
    Three-layer architecture 933
    Robotic programming languages 934
  25.8 Application Domains 935
  25.9 Summary 938
  Bibliographical and Historical Notes 939
  Exercises 942
VIII Conclusions
26 Philosophical Foundations 947
  26.1 Weak AI: Can Machines Act Intelligently? 947
    The argument from disability 948
    The mathematical objection 949
    The argument from informality 950
  26.2 Strong AI: Can Machines Really Think? 952
    The mind-body problem 954
    The "brain in a vat" experiment 955
    The brain prosthesis experiment 956
    The Chinese room 958
  26.3 The Ethics and Risks of Developing Artificial Intelligence 960
  26.4 Summary 964
  Bibliographical and Historical Notes 964
  Exercises 967
27 AI: Present and Future 968
  27.1 Agent Components 968
  27.2 Agent Architectures 970
  27.3 Are We Going in the Right Direction? 972
  27.4 What If AI Does Succeed? 974
A Mathematical background 977
  A.1 Complexity Analysis and O() Notation 977
    Asymptotic analysis 977
    NP and inherently hard problems 978
  A.2 Vectors, Matrices, and Linear Algebra 979
  A.3 Probability Distributions 981
  Bibliographical and Historical Notes 983
B Notes on Languages and Algorithms 984
  B.1 Defining Languages with Backus-Naur Form (BNF) 984
  B.2 Describing Algorithms with Pseudocode 985
  B.3 Online Help 985
Bibliography 987
Index 1045
Front matter:

Foreword
Artificial Intelligence: A Modern Approach (English, first edition) is an outstanding textbook on artificial intelligence (AI). After its publication in 1995 it was issued around the world and quickly became a bestseller. In China, an English reprint edition appeared in 2002 and was likewise very well received.
Eight years later, in 2003, the second edition (by Stuart Russell and Peter Norvig, published by the Pearson Education group) arrived. The authors explain the reason for the new edition as follows: since the first edition appeared in 1995, AI has changed greatly and its techniques have become far more practical, so every chapter has been rewritten to reflect the latest achievements in the field, and earlier results have been reinterpreted to fit better with the new findings. This fully reflects the authors' sense of responsibility toward, and serious attitude about, the new edition.
The main features of the book are as follows.
(1) Under the concept of the intelligent agent, it unifies the previously separate areas of AI, overcoming the fragmented, loosely connected presentation that earlier AI textbooks could hardly avoid, and thereby making AI more theoretical and systematic.
(2) It gives equal weight to theory and practice. In presenting the principles and methods of each area, the authors use mathematical (formal) language wherever possible, striving to place them on a rigorous theoretical footing, while also introducing the latest practical algorithms, especially methods that solve real-world problems, so that AI moves out of the "toy world" and toward practical use.
These features reflect precisely the current trend in the development of AI. As is well known, from its birth in the middle of the last century, artificial intelligence long failed to form a systematic body of theory. Some people therefore regarded AI as a kind of "engineering", others as a "technology", and some even as merely an "art". Readers may recall that in the 1980s a debate took place between N. J. Nilsson of Stanford University and R. C. Schank of Yale University over whether AI is an "engineering technology" or an "art". Such a debate was natural at the time. In the early days of AI, most researchers proceeded by first forming AI hypotheses from intuition or heuristics, then demonstrating their plausibility in a "toy world", and building a body of AI theory and methods on that basis. To sidestep the "limitations" of mathematical methods, they tended to avoid mathematical tools and kept their distance from the traditional rigorous sciences. As AI matured, however, these "traditions" changed, and the field moved steadily toward science and toward practical application: on the one hand making full use of the tools of modern science, gradually turning AI into a science, and on the other hand facing the real world and proposing workable algorithms, moving AI toward real applications. The authors have captured these two major trends in the textbook in a timely way, and this has become the book's distinguishing characteristic.
In the preface the authors specifically explain how this book differs from the first edition. In my view, the second edition strengthens and further highlights the two features above. In rewriting the book, the authors ground the theoretical material as much as possible in established, mature scientific methods such as mathematics, psychology, computer engineering, and neuroscience. The second edition further emphasizes the connections between AI and decision theory and operations research in economics, as well as control theory and cybernetics, linking AI with other scientific fields. New material has been added to every chapter, incorporating as much of the research from the intervening years as possible.
Although the book has already been translated into Chinese by Jiang Zhe and colleagues in the Department of Computer Science at Tsinghua University and published in 2004, I believe a reprint of the second edition is still worthwhile so that readers can read the original English text.
Academician of the Chinese Academy of Sciences and Professor at Tsinghua University

Preface
Artificial intelligence (AI) is a big field, and this is a big book. We have tried to explore the full breadth of the field, which encompasses logic, probability, and continuous mathematics; perception, reasoning, learning, and action; and everything from microelectronic devices to robotic planetary explorers. The book is also big because we go into some depth in presenting results, although we strive to cover only the most central ideas in the main part of each chapter. Pointers are given to further results in the bibliographical notes at the end of each chapter.
The subtitle of this book is "A Modern Approach." The intended meaning of this rather empty phrase is that we have tried to synthesize what is now known into a common framework, rather than trying to explain each subfield of AI in its own historical context. We apologize to those whose subfields are, as a result, less recognizable than they might otherwise have been.
The main unifying theme is the idea of an intelligent agent. We define AI as the study of agents that receive percepts from the environment and perform actions. Each such agent implements a function that maps percept sequences to actions, and we cover different ways to represent these functions, such as production systems, reactive agents, real-time conditional planners, neural networks, and decision-theoretic systems. We explain the role of learning as extending the reach of the designer into unknown environments, and we show how that role constrains agent design, favoring explicit knowledge representation and reasoning. We treat robotics and vision not as independently defined problems, but as occurring in the service of achieving goals. We stress the importance of the task environment in determining the appropriate agent design.
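(Editorial note: as a purely illustrative sketch, not taken from the book, the percept-sequence-to-action mapping described above can be written in a few lines of Python. The TableDrivenAgent class and the tiny vacuum-style lookup table below are hypothetical examples, assuming percepts and actions are plain strings.)

# Minimal sketch of the agent abstraction: an agent maps the sequence of
# percepts received so far to an action.
from typing import Any, Dict, List, Tuple

class TableDrivenAgent:
    """Looks up the action for the complete percept sequence in a table."""

    def __init__(self, table: Dict[Tuple[Any, ...], str]):
        self.table = table               # (percept, percept, ...) -> action
        self.percepts: List[Any] = []    # everything perceived so far

    def __call__(self, percept: Any) -> str:
        self.percepts.append(percept)
        return self.table.get(tuple(self.percepts), "NoOp")

# Hypothetical two-step example: percepts are room states, actions are moves.
agent = TableDrivenAgent({("Dirty",): "Suck", ("Dirty", "Clean"): "Right"})
print(agent("Dirty"))   # -> Suck
print(agent("Clean"))   # -> Right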
Our primary aim is to convey the ideas that have emerged over the past fifty years of AI research and the past two millennia of related work. We have tried to avoid excessive formality in the presentation of these ideas while retaining precision. Wherever appropriate, we have included pseudocode algorithms to make the ideas concrete; our pseudocode is described briefly in Appendix B. Implementations in several programming languages are available on the book's Web site, aima.cs.berkeley.edu.
This book is primarily intended for use in an undergraduate course or course sequence. It can also be used in a graduate-level course (perhaps with the addition of some of the primary sources suggested in the bibliographical notes). Because of its comprehensive coverage and large number of detailed algorithms, it is useful as a primary reference volume for AI graduate students and professionals wishing to branch out beyond their own subfield. The only prerequisite is familiarity with basic concepts of computer science (algorithms, data structures, complexity) at a sophomore level. Freshman calculus is useful for understanding neural networks and statistical learning in detail. Some of the required mathematical background is supplied in Appendix A.
Overview of the book
The book is divided into eight parts. Part I, Artificial Intelligence, offers a view of the AI enterprise based around the idea of intelligent agents--systems that can decide what to do and then do it. Part II, Problem Solving, concentrates on methods for deciding what to do when one needs to think ahead several steps--for example in navigating across a country or playing chess. Part III, Knowledge and Reasoning, discusses ways to represent knowledge about the world--how it works, what it is currently like, and what one's actions might do--and how to reason logically with that knowledge. Part IV, Planning, then discusses how to use these reasoning methods to decide what to do, particularly by constructing plans. Part V, Uncertain Knowledge and Reasoning, is analogous to Parts III and IV, but it concentrates on reasoning and decision making in the presence of uncertainty about the world, as might be faced, for example, by a system for medical diagnosis and treatment.
Together, Parts II--V describe that part of the intelligent agent responsible for reaching decisions. Part VI, Learning, describes methods for generating the knowledge required by these decision-making components. Part VII, Communicating, Perceiving, and Acting, describes ways in which an intelligent agent can perceive its environment so as to know what is going on, whether by vision, touch, hearing, or understanding language, and ways in which it can turn its plans into real actions, either as robot motion or as natural language utterances. Finally, Part VIII, Conclusions, analyzes the past and future of AI and the philosophical and ethical implications of artificial intelligence.
Changes from the first edition
Much has changed in AI since the publication of the first edition in 1995, and much has changed in this book. Every chapter has been significantly rewritten to reflect the latest work in the field, to reinterpret old work in a way that is more cohesive with new findings, and to improve the pedagogical flow of ideas. Followers of AI should be encouraged that current techniques are much more practical than those of 1995; for example the planning algorithms in the first edition could generate plans of only dozens of steps, while the algorithms in this edition scale up to tens of thousands of steps. Similar orders-of-magnitude improvements are seen in probabilistic inference, language processing, and other subfields. The following are the most notable changes in the book:
• In Part I, we acknowledge the historical contributions of control theory, game theory, economics, and neuroscience. This helps set the tone for a more integrated coverage of these ideas in subsequent chapters.
• In Part II, online search algorithms are covered and a new chapter on constraint satisfaction has been added. The latter provides a natural connection to the material on logic.
• In Part III, propositional logic, which was presented as a stepping-stone to first-order logic in the first edition, is now presented as a useful representation language in its own right, with fast inference algorithms and circuit-based agent designs. The chapters on first-order logic have been reorganized to present the material more clearly and we have added the Internet shopping domain as an example.
• In Part IV, we include newer planning methods such as GRAPHPLAN and satisfiability-based planning, and we increase coverage of scheduling, conditional planning, hierarchical planning, and multiagent planning.
• In Part V, we have augmented the material on Bayesian networks with new algorithms, such as variable elimination and Markov Chain Monte Carlo, and we have created a new chapter on uncertain temporal reasoning, covering hidden Markov models, Kalman filters, and dynamic Bayesian networks. The coverage of Markov decision processes is deepened, and we add sections on game theory and mechanism design.
• In Part VI, we tie together work in statistical, symbolic, and neural learning and add sections on boosting algorithms, the EM algorithm, instance-based learning, and kernel methods (support vector machines).
• In Part VII, coverage of language processing adds sections on discourse processing and grammar induction, as well as a chapter on probabilistic language models, with applications to information retrieval and machine translation. The coverage of robotics stresses the integration of uncertain sensor data, and the chapter on vision has updated material on object recognition.
• In Part VIII, we introduce a section on the ethical implications of AI.
Using this book
The book has 27 chapters, each requiring about a week's worth of lectures, so working through the whole book requires a two-semester sequence. Alternatively, a course can be tailored to suit the interests of the instructor and student. Through its broad coverage, the book can be used to support such courses, whether they are short, introductory undergraduate courses or specialized graduate courses on advanced topics. Sample syllabi from the more than 600 universities and colleges that have adopted the first edition are shown on the Web at aima.cs.berkeley.edu, along with suggestions to help you find a sequence appropriate to your needs.
The book includes 385 exercises. Exercises requiring significant programming are marked with a keyboard icon. These exercises can best be solved by taking advantage of the code repository at aima.cs.berkeley.edu. Some of them are large enough to be considered term projects. A number of exercises require some investigation of the literature; these are marked with a book icon.
Throughout the book, important points are marked with a pointing icon. We have included an extensive index of around 10,000 items to make it easy to find things in the book. Wherever a new term is first defined, it is also marked in the margin.
Using the Web site
At the aima.cs.berkeley.edu Web site you will find:
• implementations of the algorithms in the book in several programming languages,
• a list of over 600 schools that have used the book, many with links to online course materials,
• an annotated list of over 800 links to sites around the web with useful AI content,
• a chapter-by-chapter list of supplementary material and links,
• instructions on how to join a discussion group for the book,
• instructions on how to contact the authors with questions or comments,
• instructions on how to report errors in the book, in the likely event that some exist, and
• copies of the figures in the book, along with slides and other material for instructors.
Acknowledgments
Jitendra Malik wrote most of Chapter 24 (on vision). Most of Chapter 25 (on robotics) was written by Sebastian Thrun in this edition and by John Canny in the first edition. Doug Edwards researched the historical notes for the first edition. Tim Huang, Mark Paskin, and Cynthia Bruyns helped with formatting of the diagrams and algorithms. Alan Apt, Sondra Chavez, Tom Holm, Jake Warde, Irwin Zucker, and Camille Trentacoste at Prentice Hall tried their best to keep us on schedule and made many helpful suggestions on the book's design and content.
Stuart would like to thank his parents for their continued support and encouragement and his wife, Loy Sheflott, for her endless patience and boundless wisdom. He hopes that Gordon and Lucy will soon be reading this. RUGS (Russell's Unusual Group of Students) have been unusually helpful.
Peter would like to thank his parents (Torsten and Gerda) for getting him started, and his wife (Kris), children, and friends for encouraging and tolerating him through the long hours of writing and longer hours of rewriting.
We are indebted to the librarians at Berkeley, Stanford, MIT, and NASA, and to the developers of CiteSeer and Google, who have revolutionized the way we do research.
We can't thank all the people who have used the book and made suggestions, but we would like to acknowledge the especially helpful comments of Eyal Amir, Krzysztof Apt, Ellery Aziel, Jeff Van Baalen, Brian Baker, Don Barker, Tony Barrett, James Newton Bass, Don Beal, Howard Beck, Wolfgang Bibel, John Binder, Larry Bookman, David R. Boxall, Gerhard Brewka, Selmer Bringsjord, Carla Brodley, Chris Brown, Wilhelm Burger, Lauren Burka, Joao Cachopo, Murray Campbell, Norman Carver, Emmanuel Castro, Anil Chakravarthy, Dan Chisarick, Roberto Cipolla, David Cohen, James Coleman, June Ann Comparini, Gary Cottrell, Ernest Davis, Rina Dechter, Tom Dietterich, Chuck Dyer, Barbara Engelhardt, Doug Edwards, Kutluhan Erol, Oren Etzioni, Hana Filip, Douglas Fisher, Jeffrey Forbes, Ken Ford, John Fosler, Alex Franz, Bob Futrelle, Marek Galecki, Stefan Gerberding, Stuart Gill, Sabine Glesner, Seth Golub, Gosta Grahne, Russ Greiner, Eric Grimson, Barbara Grosz, Larry Hall, Steve Hanks, Othar Hansson, Ernst Heinz, Jim Hendler, Christoph Herrmann, Vasant Honavar, Tim Huang, Seth Hutchinson, Joost Jacob, Magnus Johansson, Dan Jurafsky, Leslie Kaelbling, Keiji Kanazawa, Surekha Kasibhatla, Simon Kasif, Henry Kautz, Gernot Kerschbaumer, Richard Kirby, Kevin Knight, Sven Koenig, Daphne Koller, Rich Korf, James Kurien, John Lafferty, Gus Larsson, John Lazzaro, Jon LeBlanc, Jason Leatherman, Frank Lee, Edward Lim, Pierre Loudsveaux, Don Loveland, Sridhar Mahadevan, Jim Martin, Andy Mayer, David McGrane, Jay Mendelsohn, Brian Milch, Steve Minton, Vibhu Mittal, Leora Morgenstern, Stephen Muggleton, Kevin Murphy, Ron Musick, Sung Myaeng, Lee Naish, Pandu Nayak, Bernhard Nebel, Stuart Nelson, XuanLong Nguyen, Illah Nourbakhsh, Steve Omohundro, David Page, David Palmer, David Parkes, Ron Parr, Mark Paskin, Tony Passera, Michael Pazzani, Wim Pijls, Ira Pohl, Martha Pollack, David Poole, Bruce Porter, Malcolm Pradhan, Bill Pringle, Lorraine Prior, Greg Provan, William Rapaport, Philip Resnik, Francesca Rossi, Jonathan Schaeffer, Richard Scherl, Lars Schuster, Sobed Shams, Stuart Shapiro, Jude Shavlik, Satinder Singh, Daniel Sleator, David Smith, Bryan So, Robert Sproull, Lynn Stein, Larry Stephens, Andreas Stolcke, Paul Stradling, Devika Subramanian, Rich Sutton, Jonathan Tash, Austin Tate, Michael Thielscher, William Thompson, Sebastian Thrun, Eric Tiedemann, Mark Torrance, Randall Upham, Paul Utgoff, Peter van Beek, Hal Varian, Sunil Vemuri, Jim Waldo, Bonnie Webber, Dan Weld, Michael Wellman, Michael Dean White, Kamin Whitehouse, Brian Williams, David Wolfe, Bill Woods, Alden Wright, Richard Yen, Weixiong Zhang, Shlomo Zilberstein, and the anonymous reviewers provided by Prentice Hall.
About the Cover
The cover image was designed by the authors and executed by Lisa Marie Sardegna and Maryann Simmons using SGI Inventor and Adobe Photoshop. The cover depicts the following items from the history of AI:
1. Aristotle's planning algorithm from De Motu Animalium (c. 400 B.C.).
2. Ramon Lull's concept generator from Ars Magna (c. 1300 A.D.).
3. Charles Babbage's Difference Engine, a prototype for the first universal computer (1848).
4. Gottlob Frege's notation for first-order logic (1879).
5. Lewis Carroll's diagrams for logical reasoning (1886).
6. Sewall Wright's probabilistic network notation (1921).
7. Alan Turing (1912--1954).
8. Shakey the Robot (1969--1973).
9. A modern diagnostic expert system (1993).

About the Authors
Stuart Russell was born in 1962 in Portsmouth, England. He received his B.A. with first-class honours in physics from Oxford University in 1982, and his Ph.D. in computer science from Stanford in 1986. He then joined the faculty of the University of California at Berkeley, where he is a professor of computer science, director of the Center for Intelligent Systems, and holder of the Smith-Zadeh Chair in Engineering. In 1990, he received the Presidential Young Investigator Award of the National Science Foundation, and in 1995 he was cowinner of the Computers and Thought Award. He was a 1996 Miller Professor of the University of California and was appointed to a Chancellor's Professorship in 2000. In 1998, he gave the Forsythe Memorial Lectures at Stanford University. He is a Fellow and former Executive Council member of the American Association for Artificial Intelligence. He has published over 100 papers on a wide range of topics in artificial intelligence. His other books include The Use of Knowledge in Analogy and Induction and (with Eric Wefald) Do the Right Thing: Studies in Limited Rationality.
Peter Norvig is director of Search Quality at Google, Inc. He is a Fellow and Executive Council member of the American Association for Artificial Intelligence. Previously, he was head of the Computational Sciences Division at NASA Ames Research Center, where he oversaw NASA's research and development in artificial intelligence and robotics. Before that he served as chief scientist at Junglee, where he helped develop one of the first Internet information extraction services, and as a senior scientist at Sun Microsystems Laboratories working on intelligent information retrieval. He received a B.S. in applied mathematics from Brown University and a Ph.D. in computer science from the University of California at Berkeley. He has been a professor at the University of Southern California and a research faculty member at Berkeley. He has over 50 publications in computer science including the books Paradigms of AI Programming: Case Studies in Common Lisp, Verbmobil: A Translation System for Face-to-Face Dialog, and Intelligent Help Systems for UNIX.