Book Details
CUDA范例精解:通用GPU编程(影印版) (English reprint of CUDA by Example: An Introduction to General-Purpose GPU Programming)
Authors: Jason Sanders, Edward Kandrot
Publisher: Tsinghua University Press
Publication date: 2010-10-01
ISBN: 9787302239956
List price: ¥39.00
About the Book
CUDA is a computing architecture designed to facilitate the development of parallel programs. In conjunction with a broad software platform, the CUDA architecture lets programmers harness the power of graphics processing units (GPUs) to build high-performance applications. GPUs, of course, have long been used to drive complex graphics and gaming applications. CUDA now brings this valuable resource to programmers working in other fields, including science, engineering, and finance. These programmers need no knowledge of graphics programming; they only need to write code in a suitably extended version of C. This book was written by two senior members of the CUDA software platform team. They show programmers how to use the technology and, through numerous working examples, cover every area of CUDA development. After a concise introduction to the CUDA platform and architecture and a quick-start guide to CUDA C, the book details the techniques behind each key CUDA feature and the trade-offs involved in using them. By reading it, you will learn when to use each CUDA C extension and how to write CUDA software that delivers outstanding performance.
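As a rough illustration of that "suitably extended version of C", here is a minimal sketch (not an excerpt from the book) of a CUDA C program: a kernel marked with the __global__ qualifier adds two arrays element by element and is launched with the <<<blocks, threads>>> syntax. The fixed size N and the omission of error checking are simplifications for this example.

#include <stdio.h>
#include <cuda_runtime.h>

#define N 256

// Kernel: runs on the GPU; each thread adds one pair of elements.
__global__ void add(const int *a, const int *b, int *c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < N)
        c[i] = a[i] + b[i];
}

int main(void) {
    int a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

    // Allocate device memory and copy the inputs to the GPU.
    int *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, N * sizeof(int));
    cudaMalloc((void **)&d_b, N * sizeof(int));
    cudaMalloc((void **)&d_c, N * sizeof(int));
    cudaMemcpy(d_a, a, N * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, N * sizeof(int), cudaMemcpyHostToDevice);

    // Launch the kernel: one block of N threads.
    add<<<1, N>>>(d_a, d_b, d_c);

    // Copy the result back to the host and print a sample value.
    cudaMemcpy(c, d_c, N * sizeof(int), cudaMemcpyDeviceToHost);
    printf("c[10] = %d\n", c[10]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}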
About the Authors
Jason Sanders (USA) and Edward Kandrot (USA). Jason Sanders is a senior software engineer on NVIDIA's CUDA platform team. He helped develop early releases of the CUDA system software and contributed to the OpenCL 1.0 specification, an industry standard for heterogeneous computing. Jason has also held engineering positions at ATI Technologies, Apple, and Novell. Edward Kandrot is a senior software engineer on NVIDIA's CUDA algorithms team. He has more than 20 years of industry experience, focused on optimizing code performance for companies including Adobe, Microsoft, Google, and Autodesk.
Table of Contents
foreword
preface
acknowledgments
about the authors
1 why cuda? why now?
1.1 chapter objectives
1.2 the age of parallel processing
1.3 the rise of gpu computing
1.4 cuda
1.5 applications of cuda
1.6 chapter review
2 getting started
2.1 chapter objectives
2.2 development environment
2.3 chapter review
3 introduction to cuda c
3.1 chapter objectives
3.2 a first program
3.3 querying devices
3.4 using device properties
3.5 chapter review
4 parallel programming in cuda c
4.1 chapter objectives
4.2 cuda parallel programming
4.3 chapter review
5 thread cooperation
5.1 chapter objectives
5.2 splitting parallel blocks
5.3 shared memory and synchronization
5.4 chapter review
6 constant memory and events
6.1 chapter objectives
6.2 constant memory
6.3 measuring performance with events
6.4 chapter review
7 texture memory
7.1 chapter objectives
7.2 texture memory overview
7.3 simulating heat transfer
7.4 chapter review
8 graphics interoperability
8.1 chapter objectives
8.2 graphics interoperation
8.3 gpu ripple with graphics interoperability
8.4 heat transfer with graphics interop
8.5 directx interoperability
8.6 chapter review
9 atomics
9.1 chapter objectives
9.2 compute capability
9.3 atomic operations overview
9.4 computing histograms
9.5 chapter review
10 streams
10.1 chapter objectives
10.2 page-locked host memory
10.3 cuda streams
10.4 using a single cuda stream
10.5 using multiple cuda streams
10.6 gpu work scheduling
10.7 using multiple cuda streams effectively
10.8 chapter review
11 cuda c on multiple gpus
11.1 chapter objectives
11.2 zero-copy host memory
11.3 using multiple gpus
11.4 portable pinned memory
11.5 chapter review
12 the final countdown
12.1 chapter objectives
12.2 cuda tools
12.3 written resources
12.4 code resources
12.5 chapter review
a advanced atomics
a.1 dot product revisited
a.2 implementing a hash table
a.3 appendix review
index