CUDA by Example

CUDA by Example: An Introduction to General-Purpose GPU Programming

3.97 (119 ratings by Goodreads)
By Jason Sanders and Edward Kandrot

Description

"This book is required reading for anyone working with accelerator-based computing systems."-From the Foreword by Jack Dongarra, University of Tennessee and Oak Ridge National LaboratoryCUDA is a computing architecture designed to facilitate the development of parallel programs. In conjunction with a comprehensive software platform, the CUDA Architecture enables programmers to draw on the immense power of graphics processing units (GPUs) when building high-performance applications. GPUs, of course, have long been available for demanding graphics and game applications. CUDA now brings this valuable resource to programmers working on applications in other domains, including science, engineering, and finance. No knowledge of graphics programming is required-just the ability to program in a modestly extended version of C. CUDA by Example, written by two senior members of the CUDA software platform team, shows programmers how to employ this new technology. The authors introduce each area of CUDA development through working examples. After a concise introduction to the CUDA platform and architecture, as well as a quick-start guide to CUDA C, the book details the techniques and trade-offs associated with each key CUDA feature. You'll discover when to use each CUDA C extension and how to write CUDA software that delivers truly outstanding performance. Major topics covered includeParallel programmingThread cooperationConstant memory and eventsTexture memoryGraphics interoperabilityAtomicsStreamsCUDA C on multiple GPUsAdvanced atomicsAdditional CUDA resourcesAll the CUDA software tools you'll need are freely available for download from NVIDIA.http://developer.nvidia.com/object/cuda-by-example.htmlshow more

Product details

  • Format: Paperback | 312 pages
  • Dimensions: 185.42 x 228.6 x 17.78mm | Weight: 566.99g
  • Publisher: Pearson Education (US)
  • Imprint: Addison-Wesley Educational Publishers Inc
  • Publication location: New Jersey, United States
  • Language: English
  • ISBN-10: 0131387685
  • ISBN-13: 9780131387683
  • Bestsellers rank: 126,798

About the authors

Jason Sanders is a senior software engineer in the CUDA Platform group at NVIDIA. While at NVIDIA, he helped develop early releases of CUDA system software and contributed to the OpenCL 1.0 Specification, an industry standard for heterogeneous computing. Jason received his master's degree in computer science from the University of California, Berkeley, where he published research in GPU computing, and he holds a bachelor's degree in electrical engineering from Princeton University. Prior to joining NVIDIA, he held positions at ATI Technologies, Apple, and Novell. When he's not writing books, Jason is typically working out, playing soccer, or shooting photos.

Edward Kandrot is a senior software engineer on the CUDA Algorithms team at NVIDIA. He has more than twenty years of industry experience focused on optimizing code and improving performance, including for Photoshop and Mozilla. Kandrot has worked for Adobe, Microsoft, and Google, and he has been a consultant at many companies, including Apple and Autodesk. When not coding, he can be found playing World of Warcraft or visiting Las Vegas for the amazing food.

Table of contents

Foreword xiii
Preface xv
Acknowledgments xvii
About the Authors xix

Chapter 1: Why CUDA? Why Now? 1
  1.1 Chapter Objectives 2
  1.2 The Age of Parallel Processing 2
  1.3 The Rise of GPU Computing 4
  1.4 CUDA 6
  1.5 Applications of CUDA 8
  1.6 Chapter Review 11

Chapter 2: Getting Started 13
  2.1 Chapter Objectives 14
  2.2 Development Environment 14
  2.3 Chapter Review 19

Chapter 3: Introduction to CUDA C 21
  3.1 Chapter Objectives 22
  3.2 A First Program 22
  3.3 Querying Devices 27
  3.4 Using Device Properties 33
  3.5 Chapter Review 35

Chapter 4: Parallel Programming in CUDA C 37
  4.1 Chapter Objectives 38
  4.2 CUDA Parallel Programming 38
  4.3 Chapter Review 57

Chapter 5: Thread Cooperation 59
  5.1 Chapter Objectives 60
  5.2 Splitting Parallel Blocks 60
  5.3 Shared Memory and Synchronization 75
  5.4 Chapter Review 94

Chapter 6: Constant Memory and Events 95
  6.1 Chapter Objectives 96
  6.2 Constant Memory 96
  6.3 Measuring Performance with Events 108
  6.4 Chapter Review 114

Chapter 7: Texture Memory 115
  7.1 Chapter Objectives 116
  7.2 Texture Memory Overview 116
  7.3 Simulating Heat Transfer 117
  7.4 Chapter Review 137

Chapter 8: Graphics Interoperability 139
  8.1 Chapter Objectives 140
  8.2 Graphics Interoperation 140
  8.3 GPU Ripple with Graphics Interoperability 147
  8.4 Heat Transfer with Graphics Interop 154
  8.5 DirectX Interoperability 160
  8.6 Chapter Review 161

Chapter 9: Atomics 163
  9.1 Chapter Objectives 164
  9.2 Compute Capability 164
  9.3 Atomic Operations Overview 168
  9.4 Computing Histograms 170
  9.5 Chapter Review 183

Chapter 10: Streams 185
  10.1 Chapter Objectives 186
  10.2 Page-Locked Host Memory 186
  10.3 CUDA Streams 192
  10.4 Using a Single CUDA Stream 192
  10.5 Using Multiple CUDA Streams 198
  10.6 GPU Work Scheduling 205
  10.7 Using Multiple CUDA Streams Effectively 208
  10.8 Chapter Review 211

Chapter 11: CUDA C on Multiple GPUs 213
  11.1 Chapter Objectives 214
  11.2 Zero-Copy Host Memory 214
  11.3 Using Multiple GPUs 224
  11.4 Portable Pinned Memory 230
  11.5 Chapter Review 235

Chapter 12: The Final Countdown 237
  12.1 Chapter Objectives 238
  12.2 CUDA Tools 238
  12.3 Written Resources 244
  12.4 Code Resources 246
  12.5 Chapter Review 248

Appendix A: Advanced Atomics 249
  A.1 Dot Product Revisited 250
  A.2 Implementing a Hash Table 258
  A.3 Appendix Review 277

Index 279

Rating details

119 ratings
3.97 out of 5 stars
5 stars: 31% (37)
4 stars: 39% (46)
3 stars: 28% (33)
2 stars: 2% (2)
1 star: 1% (1)
Book ratings by Goodreads