Book Details
Machine Learning: Modeling Data Locally and Globally (English Edition)
Authors: Kaizhu Huang, Haiqin Yang, Irwin King, Michael R. Lyu
Publisher: Zhejiang University Press
Publication date: 2008-04-01
ISBN: 9787308058315
List price: ¥70.00
About the Book
Machine Learning: Modeling Data Locally and Globally presents a novel and unified theory that attempts to integrate different algorithms seamlessly. Specifically, the book characterizes the inner nature of machine learning algorithms as either "local learning" or "global learning." This theory not only connects previous machine learning methods and serves as a roadmap through the various models, but, more importantly, also motivates a framework that can learn from data both locally and globally, helping researchers gain deeper insight into, and a more comprehensive understanding of, the techniques in this field. The book reviews current topics, new theories, and applications.

Kaizhu Huang was a researcher at the Fujitsu Research and Development Center and is currently a research fellow at The Chinese University of Hong Kong. Haiqin Yang leads the image processing group at HiSilicon Technologies. Irwin King and Michael R. Lyu are professors in the Department of Computer Science and Engineering at The Chinese University of Hong Kong.
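To make the book's dichotomy concrete: a global method summarizes each class as a whole, for example by fitting a distribution to it, while a local method lets points near the decision boundary dominate, as the margin does in a support vector machine. The book's unifying model, the Maxi-Min Margin Machine (M4) of Chapter 4, blends the two views by maximizing a margin scaled by each class's covariance. For orientation only, the separable-case formulation below follows the authors' published papers on M4 and may differ from the book's exact notation:

\[
\max_{\rho,\ \mathbf{w}\neq\mathbf{0},\ b}\ \rho
\quad\text{s.t.}\quad
\frac{\mathbf{w}^{\top}\mathbf{x}_i + b}{\sqrt{\mathbf{w}^{\top}\Sigma_x\mathbf{w}}} \ge \rho,\ i=1,\dots,N_x,
\qquad
\frac{-(\mathbf{w}^{\top}\mathbf{y}_j + b)}{\sqrt{\mathbf{w}^{\top}\Sigma_y\mathbf{w}}} \ge \rho,\ j=1,\dots,N_y,
\]

where \(\mathbf{x}_i\) and \(\mathbf{y}_j\) are the training points of the two classes and \(\Sigma_x\), \(\Sigma_y\) their sample covariance matrices. The sketch below is likewise illustrative rather than code from the book: it contrasts a purely global learner (a generative Gaussian classifier) with a purely local one (a linear SVM) on an assumed synthetic dataset; the data, seed, and hyperparameters are hypothetical.

```python
# Minimal illustrative sketch (not the book's code): "global learning" as a
# generative Gaussian classifier that models each class's whole distribution,
# vs. "local learning" as a max-margin linear SVM driven by boundary points.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_pos = rng.multivariate_normal([2.0, 2.0], [[1.0, 0.5], [0.5, 1.0]], size=100)
X_neg = rng.multivariate_normal([-2.0, -2.0], [[1.0, -0.3], [-0.3, 1.0]], size=100)
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 100 + [-1] * 100)

def gaussian_log_density(x, mu, cov):
    """Log-density of a multivariate Gaussian; used by the global model."""
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff)
                   + logdet + len(mu) * np.log(2.0 * np.pi))

# Global learning: estimate a Gaussian per class, classify by likelihood.
mu_p, cov_p = X_pos.mean(axis=0), np.cov(X_pos, rowvar=False)
mu_n, cov_n = X_neg.mean(axis=0), np.cov(X_neg, rowvar=False)
global_pred = np.array([
    1 if gaussian_log_density(x, mu_p, cov_p) > gaussian_log_density(x, mu_n, cov_n)
    else -1
    for x in X
])

# Local learning: a large-margin linear SVM (only support vectors matter).
local_pred = SVC(kernel="linear", C=1.0).fit(X, y).predict(X)

print("global (generative) training accuracy:", (global_pred == y).mean())
print("local  (max-margin) training accuracy:", (local_pred == y).mean())
```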
About the Authors
No author biography is available for Machine Learning: Modeling Data Locally and Globally (English Edition).
Table of Contents
1 Introduction
1.1 Learning and Global Modeling
1.2 Learning and Local Modeling
1.3 Hybrid Learning
1.4 Major Contributions
1.5 Scope
1.6 Book Organization
References
2 Global Learning vs. Local Learning
2.1 Problem Definition
2.2 Global Learning
2.2.1 Generative Learning
2.2.2 Non-parametric Learning
2.2.3 The Minimum Error Minimax Probability Machine
2.3 Local Learning
2.4 Hybrid Learning
2.5 Maxi-Min Margin Machine
References
3 A General Global Learning Model: MEMPM
3.1 Marshall and Olkin Theory
3.2 Minimum Error Minimax Probability Decision Hyperplane
3.2.1 Problem Definition
3.2.2 Interpretation
3.2.3 Special Case for Biased Classifications
3.2.4 Solving the MEMPM Optimization Problem
3.2.5 When the Worst-Case Bayes Optimal Hyperplane Becomes the True One
3.2.6 Geometrical Interpretation
3.3 Robust Version
3.4 Kernelization
3.4.1 Kernelization Theory for BMPM
3.4.2 Notations in Kernelization Theorem of BMPM
3.4.3 Kernelization Results
3.5 Experiments
3.5.1 Model Illustration on a Synthetic Dataset
3.5.2 Evaluations on Benchmark Datasets
3.5.3 Evaluations of BMPM on the Heart-Disease Dataset
3.6 How Tight Is the Bound
3.7 On the Concavity of MEMPM
3.8 Limitations and Future Work
3.9 Summary
References
4 Learning Locally and Globally: Maxi-Min Margin Machine
4.1 Maxi-Min Margin Machine
4.1.1 Separable Case
4.1.2 Connections with Other Models
4.1.3 Nonseparable Case
4.1.4 Further Connection with Minimum Error Minimax Probability Machine
4.2 Bound on the Error Rate
4.3 Reduction
4.4 Kernelization
4.4.1 Foundation of Kernelization for M4
4.4.2 Kernelization Result
4.5 Experiments
4.5.1 Evaluations on Three Synthetic Toy Datasets
4.5.2 Evaluations on Benchmark Datasets
4.6 Discussions and Future Work
4.7 Summary
References
5 Extension I: BMPM for Imbalanced Learning
5.1 Introduction to Imbalanced Learning
5.2 Biased Minimax Probability Machine
5.3 Learning from Imbalanced Data by Using BMPM
5.3.1 Four Criteria to Evaluate Learning from Imbalanced Data
5.3.2 BMPM for Maximizing the Sum of the Accuracies
5.3.3 BMPM for ROC Analysis
6 Extension II: A Regression Model from M4
7 Extension III: Variational Margin Settings within Local Data
8 Conclusion and Future Work
References
Index